00:00:00.001 Started by upstream project "autotest-per-patch" build number 127142 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24288 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.117 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.118 The recommended git tool is: git 00:00:00.118 using credential 00000000-0000-0000-0000-000000000002 00:00:00.120 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.165 Fetching changes from the remote Git repository 00:00:00.167 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.243 > git --version # 'git version 2.39.2' 00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.268 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.268 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/29/24129/7 # timeout=5 00:00:08.457 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.469 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.483 Checking out Revision 178f233a2a13202f6c9967830fd93e30560100d5 (FETCH_HEAD) 00:00:08.483 > git config core.sparsecheckout # timeout=10 00:00:08.495 > git read-tree -mu HEAD # timeout=10 00:00:08.514 > git checkout -f 178f233a2a13202f6c9967830fd93e30560100d5 # timeout=5 00:00:08.538 Commit message: "jenkins/jjb: remove nvme tests from distro specific jobs." 00:00:08.538 > git rev-list --no-walk c396a3cd44e4090a57fb151c18fefbf4a9bd324b # timeout=10 00:00:08.625 [Pipeline] Start of Pipeline 00:00:08.639 [Pipeline] library 00:00:08.640 Loading library shm_lib@master 00:00:08.640 Library shm_lib@master is cached. Copying from home. 00:00:08.654 [Pipeline] node 00:00:08.662 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.663 [Pipeline] { 00:00:08.672 [Pipeline] catchError 00:00:08.674 [Pipeline] { 00:00:08.684 [Pipeline] wrap 00:00:08.691 [Pipeline] { 00:00:08.698 [Pipeline] stage 00:00:08.700 [Pipeline] { (Prologue) 00:00:08.876 [Pipeline] sh 00:00:09.163 + logger -p user.info -t JENKINS-CI 00:00:09.179 [Pipeline] echo 00:00:09.180 Node: CYP9 00:00:09.187 [Pipeline] sh 00:00:09.494 [Pipeline] setCustomBuildProperty 00:00:09.504 [Pipeline] echo 00:00:09.505 Cleanup processes 00:00:09.508 [Pipeline] sh 00:00:09.828 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.828 952850 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.839 [Pipeline] sh 00:00:10.123 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.123 ++ grep -v 'sudo pgrep' 00:00:10.123 ++ awk '{print $1}' 00:00:10.123 + sudo kill -9 00:00:10.123 + true 00:00:10.139 [Pipeline] cleanWs 00:00:10.150 [WS-CLEANUP] Deleting project workspace... 00:00:10.150 [WS-CLEANUP] Deferred wipeout is used... 
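The "Cleanup processes" step traced above reduces to a small shell pattern: list any SPDK processes still running under the workspace, drop the pgrep invocation itself from the match, and force-kill whatever is left. A minimal stand-alone sketch of that pattern follows; the workspace path is copied from the trace, while the script framing itself is illustrative and not part of the pipeline.

#!/usr/bin/env bash
# Kill stale SPDK processes from a previous run, as the "Cleanup processes"
# step above does. The workspace path is copied from the log.
set -euo pipefail

workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# pgrep -af matches against the full command line; filter out the pgrep
# command itself and keep only the PID column.
pids=$(sudo pgrep -af "$workspace" | grep -v 'sudo pgrep' | awk '{print $1}' || true)

# An empty PID list makes kill fail, hence the trailing "|| true" -- the
# same reason the trace above shows "+ true" after "+ sudo kill -9".
[ -n "$pids" ] && sudo kill -9 $pids || true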
00:00:10.158 [WS-CLEANUP] done 00:00:10.163 [Pipeline] setCustomBuildProperty 00:00:10.179 [Pipeline] sh 00:00:10.465 + sudo git config --global --replace-all safe.directory '*' 00:00:10.549 [Pipeline] httpRequest 00:00:10.570 [Pipeline] echo 00:00:10.572 Sorcerer 10.211.164.101 is alive 00:00:10.581 [Pipeline] httpRequest 00:00:10.585 HttpMethod: GET 00:00:10.586 URL: http://10.211.164.101/packages/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:00:10.587 Sending request to url: http://10.211.164.101/packages/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:00:10.593 Response Code: HTTP/1.1 200 OK 00:00:10.594 Success: Status code 200 is in the accepted range: 200,404 00:00:10.594 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:00:19.142 [Pipeline] sh 00:00:19.430 + tar --no-same-owner -xf jbp_178f233a2a13202f6c9967830fd93e30560100d5.tar.gz 00:00:19.447 [Pipeline] httpRequest 00:00:19.469 [Pipeline] echo 00:00:19.471 Sorcerer 10.211.164.101 is alive 00:00:19.480 [Pipeline] httpRequest 00:00:19.485 HttpMethod: GET 00:00:19.486 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:19.486 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:19.493 Response Code: HTTP/1.1 200 OK 00:00:19.494 Success: Status code 200 is in the accepted range: 200,404 00:00:19.495 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:47.646 [Pipeline] sh 00:01:47.935 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:51.251 [Pipeline] sh 00:01:51.542 + git -C spdk log --oneline -n5 00:01:51.542 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
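The two httpRequest/tar pairs above pull pre-packaged jbp and spdk snapshots from the internal package cache at 10.211.164.101 and unpack them into the workspace. Outside of Jenkins the same two steps look roughly like the sketch below; curl stands in for the pipeline's httpRequest step, so treat that part as an assumption.

#!/usr/bin/env bash
# Fetch and unpack one of the pre-built snapshots the way the pipeline does.
# URL and tarball name are taken from the log; curl replaces the Jenkins
# httpRequest step and is an assumption.
set -euo pipefail

pkg_server=http://10.211.164.101/packages
workspace=/var/jenkins/workspace/nvmf-tcp-phy-autotest
tarball=spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz

cd "$workspace"
curl -fsS -o "$tarball" "$pkg_server/$tarball"
# --no-same-owner matches the extraction command shown in the log.
tar --no-same-owner -xf "$tarball"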
00:01:51.542 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:51.542 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:51.542 d005e023b raid: fix empty slot not updated in sb after resize 00:01:51.542 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:51.555 [Pipeline] } 00:01:51.573 [Pipeline] // stage 00:01:51.584 [Pipeline] stage 00:01:51.586 [Pipeline] { (Prepare) 00:01:51.601 [Pipeline] writeFile 00:01:51.614 [Pipeline] sh 00:01:51.899 + logger -p user.info -t JENKINS-CI 00:01:51.912 [Pipeline] sh 00:01:52.197 + logger -p user.info -t JENKINS-CI 00:01:52.210 [Pipeline] sh 00:01:52.515 + cat autorun-spdk.conf 00:01:52.515 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.515 SPDK_TEST_NVMF=1 00:01:52.515 SPDK_TEST_NVME_CLI=1 00:01:52.515 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.515 SPDK_TEST_NVMF_NICS=e810 00:01:52.515 SPDK_TEST_VFIOUSER=1 00:01:52.515 SPDK_RUN_UBSAN=1 00:01:52.515 NET_TYPE=phy 00:01:52.523 RUN_NIGHTLY=0 00:01:52.527 [Pipeline] readFile 00:01:52.552 [Pipeline] withEnv 00:01:52.554 [Pipeline] { 00:01:52.567 [Pipeline] sh 00:01:52.904 + set -ex 00:01:52.904 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:52.904 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:52.904 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.904 ++ SPDK_TEST_NVMF=1 00:01:52.904 ++ SPDK_TEST_NVME_CLI=1 00:01:52.904 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.904 ++ SPDK_TEST_NVMF_NICS=e810 00:01:52.905 ++ SPDK_TEST_VFIOUSER=1 00:01:52.905 ++ SPDK_RUN_UBSAN=1 00:01:52.905 ++ NET_TYPE=phy 00:01:52.905 ++ RUN_NIGHTLY=0 00:01:52.905 + case $SPDK_TEST_NVMF_NICS in 00:01:52.905 + DRIVERS=ice 00:01:52.905 + [[ tcp == \r\d\m\a ]] 00:01:52.905 + [[ -n ice ]] 00:01:52.905 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:52.905 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:52.905 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:52.905 rmmod: ERROR: Module irdma is not currently loaded 00:01:52.905 rmmod: ERROR: Module i40iw is not currently loaded 00:01:52.905 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:52.905 + true 00:01:52.905 + for D in $DRIVERS 00:01:52.905 + sudo modprobe ice 00:01:52.905 + exit 0 00:01:52.915 [Pipeline] } 00:01:52.934 [Pipeline] // withEnv 00:01:52.940 [Pipeline] } 00:01:52.957 [Pipeline] // stage 00:01:52.966 [Pipeline] catchError 00:01:52.968 [Pipeline] { 00:01:52.979 [Pipeline] timeout 00:01:52.979 Timeout set to expire in 50 min 00:01:52.980 [Pipeline] { 00:01:52.993 [Pipeline] stage 00:01:52.995 [Pipeline] { (Tests) 00:01:53.010 [Pipeline] sh 00:01:53.298 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:53.299 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:53.299 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:53.299 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:53.299 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.299 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:53.299 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:53.299 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:53.299 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:53.299 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:53.299 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:53.299 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:53.299 + source /etc/os-release 00:01:53.299 ++ NAME='Fedora Linux' 00:01:53.299 ++ VERSION='38 (Cloud Edition)' 00:01:53.299 ++ ID=fedora 00:01:53.299 ++ VERSION_ID=38 00:01:53.299 ++ VERSION_CODENAME= 00:01:53.299 ++ PLATFORM_ID=platform:f38 00:01:53.299 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:53.299 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:53.299 ++ LOGO=fedora-logo-icon 00:01:53.299 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:53.299 ++ HOME_URL=https://fedoraproject.org/ 00:01:53.299 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:53.299 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:53.299 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:53.299 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:53.299 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:53.299 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:53.299 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:53.299 ++ SUPPORT_END=2024-05-14 00:01:53.299 ++ VARIANT='Cloud Edition' 00:01:53.299 ++ VARIANT_ID=cloud 00:01:53.299 + uname -a 00:01:53.299 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:53.299 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:55.848 Hugepages 00:01:55.848 node hugesize free / total 00:01:55.848 node0 1048576kB 0 / 0 00:01:55.848 node0 2048kB 0 / 0 00:01:55.848 node1 1048576kB 0 / 0 00:01:55.848 node1 2048kB 0 / 0 00:01:55.848 00:01:55.848 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.848 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:01:55.848 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:01:55.848 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:01:55.848 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:01:55.848 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:01:55.848 + rm -f /tmp/spdk-ld-path 00:01:55.848 + source autorun-spdk.conf 00:01:55.848 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.848 ++ SPDK_TEST_NVMF=1 00:01:55.848 ++ SPDK_TEST_NVME_CLI=1 00:01:55.848 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.848 ++ SPDK_TEST_NVMF_NICS=e810 00:01:55.848 ++ SPDK_TEST_VFIOUSER=1 00:01:55.848 ++ SPDK_RUN_UBSAN=1 00:01:55.848 ++ NET_TYPE=phy 00:01:55.848 ++ RUN_NIGHTLY=0 00:01:55.848 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.848 + [[ -n '' ]] 00:01:55.848 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.848 + for M in /var/spdk/build-*-manifest.txt 00:01:55.848 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:55.848 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:55.848 + for M in /var/spdk/build-*-manifest.txt 00:01:55.848 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.848 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:55.848 ++ uname 00:01:55.848 + [[ Linux == \L\i\n\u\x ]] 00:01:55.848 + sudo dmesg -T 00:01:55.848 + sudo dmesg --clear 00:01:55.848 + dmesg_pid=953822 00:01:55.848 + [[ Fedora Linux == FreeBSD ]] 00:01:55.848 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.848 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.848 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.848 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:55.848 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:55.848 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.848 + sudo dmesg -Tw 00:01:55.848 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.848 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.848 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.848 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.848 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.848 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.848 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.848 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.848 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.848 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.848 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:55.848 Test configuration: 00:01:55.848 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.848 SPDK_TEST_NVMF=1 00:01:55.848 SPDK_TEST_NVME_CLI=1 00:01:55.848 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:55.848 SPDK_TEST_NVMF_NICS=e810 00:01:55.848 SPDK_TEST_VFIOUSER=1 00:01:55.848 SPDK_RUN_UBSAN=1 00:01:55.848 NET_TYPE=phy 00:01:55.848 RUN_NIGHTLY=0 09:50:34 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:55.848 09:50:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.848 09:50:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.848 09:50:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.848 09:50:34 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.848 09:50:34 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.848 09:50:34 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.848 09:50:34 -- paths/export.sh@5 -- $ export PATH 00:01:55.848 09:50:34 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.848 09:50:34 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:55.848 09:50:34 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:55.848 09:50:34 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721893834.XXXXXX 00:01:55.848 09:50:34 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721893834.KO4TKS 00:01:55.848 09:50:34 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:55.848 09:50:34 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:55.848 09:50:34 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:55.848 09:50:34 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:55.848 09:50:34 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.848 09:50:34 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:55.848 09:50:34 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:55.848 09:50:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.848 09:50:34 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:55.849 09:50:34 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:55.849 09:50:34 -- pm/common@17 -- $ local monitor 00:01:55.849 09:50:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.849 09:50:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.849 09:50:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.849 09:50:34 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:55.849 09:50:34 -- pm/common@21 -- $ date +%s 00:01:55.849 09:50:34 -- pm/common@25 -- $ sleep 1 00:01:55.849 09:50:34 -- pm/common@21 -- $ date +%s 00:01:55.849 09:50:34 -- pm/common@21 -- $ date +%s 00:01:55.849 09:50:34 -- pm/common@21 -- $ date +%s 00:01:55.849 09:50:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893834 00:01:55.849 09:50:34 -- pm/common@21 -- 
$ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893834 00:01:55.849 09:50:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893834 00:01:55.849 09:50:34 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893834 00:01:55.849 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893834_collect-vmstat.pm.log 00:01:56.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893834_collect-cpu-load.pm.log 00:01:56.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893834_collect-cpu-temp.pm.log 00:01:56.110 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893834_collect-bmc-pm.bmc.pm.log 00:01:57.054 09:50:35 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:57.054 09:50:35 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:57.054 09:50:35 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:57.054 09:50:35 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.054 09:50:35 -- spdk/autobuild.sh@16 -- $ date -u 00:01:57.054 Thu Jul 25 07:50:35 AM UTC 2024 00:01:57.054 09:50:35 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:57.054 v24.09-pre-321-g704257090 00:01:57.054 09:50:35 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:57.054 09:50:35 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:57.054 09:50:35 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:57.054 09:50:35 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:57.054 09:50:35 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:57.054 09:50:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:57.054 ************************************ 00:01:57.054 START TEST ubsan 00:01:57.054 ************************************ 00:01:57.054 09:50:36 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:57.054 using ubsan 00:01:57.054 00:01:57.054 real 0m0.001s 00:01:57.054 user 0m0.000s 00:01:57.054 sys 0m0.000s 00:01:57.054 09:50:36 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:57.054 09:50:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:57.054 ************************************ 00:01:57.054 END TEST ubsan 00:01:57.054 ************************************ 00:01:57.054 09:50:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:57.054 09:50:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:57.054 09:50:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:57.054 09:50:36 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:57.316 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:57.316 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:57.576 Using 'verbs' RDMA provider 00:02:13.435 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:25.680 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:25.680 Creating mk/config.mk...done. 00:02:25.680 Creating mk/cc.flags.mk...done. 00:02:25.680 Type 'make' to build. 00:02:25.680 09:51:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:25.680 09:51:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:25.680 09:51:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:25.680 09:51:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:25.680 ************************************ 00:02:25.680 START TEST make 00:02:25.680 ************************************ 00:02:25.680 09:51:04 make -- common/autotest_common.sh@1125 -- $ make -j144 00:02:25.680 make[1]: Nothing to be done for 'all'. 00:02:26.620 The Meson build system 00:02:26.620 Version: 1.3.1 00:02:26.620 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:26.620 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:26.620 Build type: native build 00:02:26.620 Project name: libvfio-user 00:02:26.620 Project version: 0.0.1 00:02:26.620 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:26.620 C linker for the host machine: cc ld.bfd 2.39-16 00:02:26.620 Host machine cpu family: x86_64 00:02:26.620 Host machine cpu: x86_64 00:02:26.620 Run-time dependency threads found: YES 00:02:26.620 Library dl found: YES 00:02:26.620 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:26.620 Run-time dependency json-c found: YES 0.17 00:02:26.620 Run-time dependency cmocka found: YES 1.1.7 00:02:26.620 Program pytest-3 found: NO 00:02:26.620 Program flake8 found: NO 00:02:26.620 Program misspell-fixer found: NO 00:02:26.620 Program restructuredtext-lint found: NO 00:02:26.620 Program valgrind found: YES (/usr/bin/valgrind) 00:02:26.620 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.620 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.620 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.620 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:26.620 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:26.620 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:26.621 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
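For reference, the SPDK build driven above comes down to the configure-and-make pair visible in the trace; run stand-alone it would look like the sketch below, with the flag set copied from the autobuild configure line and the -j value from the run_test make call.

#!/usr/bin/env bash
# Stand-alone equivalent of the SPDK configure/build step shown above.
# All flags are copied from the autobuild trace in this log.
set -euo pipefail

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

./configure --enable-debug --enable-werror --with-rdma --with-idxd \
  --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
  --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared

make -j144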
00:02:26.621 Build targets in project: 8 00:02:26.621 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:26.621 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:26.621 00:02:26.621 libvfio-user 0.0.1 00:02:26.621 00:02:26.621 User defined options 00:02:26.621 buildtype : debug 00:02:26.621 default_library: shared 00:02:26.621 libdir : /usr/local/lib 00:02:26.621 00:02:26.621 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.894 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:26.894 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:26.894 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:26.894 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:26.894 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:26.894 [5/37] Compiling C object samples/null.p/null.c.o 00:02:26.894 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:26.894 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:26.894 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:26.894 [9/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:26.894 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:26.894 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:26.894 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:26.894 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:26.894 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:26.894 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:26.894 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:27.183 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:27.183 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:27.183 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:27.183 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:27.183 [21/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:27.183 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:27.183 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:27.183 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:27.183 [25/37] Compiling C object samples/client.p/client.c.o 00:02:27.183 [26/37] Compiling C object samples/server.p/server.c.o 00:02:27.183 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:27.183 [28/37] Linking target samples/client 00:02:27.183 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:27.183 [30/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:27.183 [31/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:27.183 [32/37] Linking target samples/null 00:02:27.183 [33/37] Linking target samples/gpio-pci-idio-16 00:02:27.183 [34/37] Linking target samples/lspci 00:02:27.183 [35/37] Linking target samples/server 00:02:27.183 [36/37] Linking target test/unit_tests 00:02:27.183 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:27.183 INFO: autodetecting backend as ninja 00:02:27.183 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
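The libvfio-user sub-build around this point is a plain Meson/ninja sequence. A stand-alone sketch follows, with the source and build directories taken from the "Source dir"/"Build dir" lines and the install step from the DESTDIR command shown below; the exact meson setup command line is not printed in the log, so its option spelling here is reconstructed from the "User defined options" block.

#!/usr/bin/env bash
# Sketch of the libvfio-user configure/build/install sequence shown here.
# Paths come from the log; the meson setup options are reconstructed from
# the printed "User defined options" (buildtype, default_library, libdir).
set -euo pipefail

src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
build=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
dest=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user

meson setup "$build" "$src" --buildtype debug --default-library shared \
  --libdir /usr/local/lib
ninja -C "$build"
DESTDIR="$dest" meson install --quiet -C "$build"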
00:02:27.183 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.755 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.755 ninja: no work to do. 00:02:34.354 The Meson build system 00:02:34.354 Version: 1.3.1 00:02:34.354 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:34.354 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:34.354 Build type: native build 00:02:34.354 Program cat found: YES (/usr/bin/cat) 00:02:34.354 Project name: DPDK 00:02:34.354 Project version: 24.03.0 00:02:34.354 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:34.354 C linker for the host machine: cc ld.bfd 2.39-16 00:02:34.354 Host machine cpu family: x86_64 00:02:34.354 Host machine cpu: x86_64 00:02:34.354 Message: ## Building in Developer Mode ## 00:02:34.354 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:34.354 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:34.354 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:34.354 Program python3 found: YES (/usr/bin/python3) 00:02:34.354 Program cat found: YES (/usr/bin/cat) 00:02:34.354 Compiler for C supports arguments -march=native: YES 00:02:34.354 Checking for size of "void *" : 8 00:02:34.354 Checking for size of "void *" : 8 (cached) 00:02:34.354 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:34.354 Library m found: YES 00:02:34.354 Library numa found: YES 00:02:34.354 Has header "numaif.h" : YES 00:02:34.354 Library fdt found: NO 00:02:34.354 Library execinfo found: NO 00:02:34.354 Has header "execinfo.h" : YES 00:02:34.354 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:34.354 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:34.354 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:34.354 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:34.354 Run-time dependency openssl found: YES 3.0.9 00:02:34.354 Run-time dependency libpcap found: YES 1.10.4 00:02:34.354 Has header "pcap.h" with dependency libpcap: YES 00:02:34.354 Compiler for C supports arguments -Wcast-qual: YES 00:02:34.354 Compiler for C supports arguments -Wdeprecated: YES 00:02:34.354 Compiler for C supports arguments -Wformat: YES 00:02:34.354 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:34.354 Compiler for C supports arguments -Wformat-security: NO 00:02:34.354 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:34.354 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:34.354 Compiler for C supports arguments -Wnested-externs: YES 00:02:34.354 Compiler for C supports arguments -Wold-style-definition: YES 00:02:34.354 Compiler for C supports arguments -Wpointer-arith: YES 00:02:34.354 Compiler for C supports arguments -Wsign-compare: YES 00:02:34.354 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:34.354 Compiler for C supports arguments -Wundef: YES 00:02:34.354 Compiler for C supports arguments -Wwrite-strings: YES 00:02:34.354 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:34.354 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:34.354 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:34.354 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:34.354 Program objdump found: YES (/usr/bin/objdump) 00:02:34.354 Compiler for C supports arguments -mavx512f: YES 00:02:34.354 Checking if "AVX512 checking" compiles: YES 00:02:34.354 Fetching value of define "__SSE4_2__" : 1 00:02:34.354 Fetching value of define "__AES__" : 1 00:02:34.354 Fetching value of define "__AVX__" : 1 00:02:34.354 Fetching value of define "__AVX2__" : 1 00:02:34.354 Fetching value of define "__AVX512BW__" : 1 00:02:34.354 Fetching value of define "__AVX512CD__" : 1 00:02:34.354 Fetching value of define "__AVX512DQ__" : 1 00:02:34.354 Fetching value of define "__AVX512F__" : 1 00:02:34.354 Fetching value of define "__AVX512VL__" : 1 00:02:34.354 Fetching value of define "__PCLMUL__" : 1 00:02:34.354 Fetching value of define "__RDRND__" : 1 00:02:34.354 Fetching value of define "__RDSEED__" : 1 00:02:34.354 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:34.354 Fetching value of define "__znver1__" : (undefined) 00:02:34.354 Fetching value of define "__znver2__" : (undefined) 00:02:34.354 Fetching value of define "__znver3__" : (undefined) 00:02:34.354 Fetching value of define "__znver4__" : (undefined) 00:02:34.354 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:34.354 Message: lib/log: Defining dependency "log" 00:02:34.354 Message: lib/kvargs: Defining dependency "kvargs" 00:02:34.354 Message: lib/telemetry: Defining dependency "telemetry" 00:02:34.354 Checking for function "getentropy" : NO 00:02:34.354 Message: lib/eal: Defining dependency "eal" 00:02:34.354 Message: lib/ring: Defining dependency "ring" 00:02:34.354 Message: lib/rcu: Defining dependency "rcu" 00:02:34.354 Message: lib/mempool: Defining dependency "mempool" 00:02:34.354 Message: lib/mbuf: Defining dependency "mbuf" 00:02:34.354 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:34.354 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:34.354 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:34.354 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:34.354 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:34.354 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:34.354 Compiler for C supports arguments -mpclmul: YES 00:02:34.354 Compiler for C supports arguments -maes: YES 00:02:34.354 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:34.354 Compiler for C supports arguments -mavx512bw: YES 00:02:34.354 Compiler for C supports arguments -mavx512dq: YES 00:02:34.354 Compiler for C supports arguments -mavx512vl: YES 00:02:34.354 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:34.354 Compiler for C supports arguments -mavx2: YES 00:02:34.354 Compiler for C supports arguments -mavx: YES 00:02:34.354 Message: lib/net: Defining dependency "net" 00:02:34.354 Message: lib/meter: Defining dependency "meter" 00:02:34.354 Message: lib/ethdev: Defining dependency "ethdev" 00:02:34.354 Message: lib/pci: Defining dependency "pci" 00:02:34.354 Message: lib/cmdline: Defining dependency "cmdline" 00:02:34.354 Message: lib/hash: Defining dependency "hash" 00:02:34.354 Message: lib/timer: Defining dependency "timer" 00:02:34.354 Message: lib/compressdev: Defining dependency "compressdev" 00:02:34.354 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:34.354 Message: lib/dmadev: Defining dependency "dmadev" 00:02:34.354 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:34.354 Message: lib/power: Defining dependency "power" 00:02:34.354 Message: lib/reorder: Defining dependency "reorder" 00:02:34.355 Message: lib/security: Defining dependency "security" 00:02:34.355 Has header "linux/userfaultfd.h" : YES 00:02:34.355 Has header "linux/vduse.h" : YES 00:02:34.355 Message: lib/vhost: Defining dependency "vhost" 00:02:34.355 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:34.355 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:34.355 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:34.355 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:34.355 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:34.355 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:34.355 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:34.355 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:34.355 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:34.355 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:34.355 Program doxygen found: YES (/usr/bin/doxygen) 00:02:34.355 Configuring doxy-api-html.conf using configuration 00:02:34.355 Configuring doxy-api-man.conf using configuration 00:02:34.355 Program mandb found: YES (/usr/bin/mandb) 00:02:34.355 Program sphinx-build found: NO 00:02:34.355 Configuring rte_build_config.h using configuration 00:02:34.355 Message: 00:02:34.355 ================= 00:02:34.355 Applications Enabled 00:02:34.355 ================= 00:02:34.355 00:02:34.355 apps: 00:02:34.355 00:02:34.355 00:02:34.355 Message: 00:02:34.355 ================= 00:02:34.355 Libraries Enabled 00:02:34.355 ================= 00:02:34.355 00:02:34.355 libs: 00:02:34.355 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:34.355 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:34.355 cryptodev, dmadev, power, reorder, security, vhost, 00:02:34.355 00:02:34.355 Message: 00:02:34.355 =============== 00:02:34.355 Drivers Enabled 00:02:34.355 =============== 00:02:34.355 00:02:34.355 common: 00:02:34.355 00:02:34.355 bus: 00:02:34.355 pci, vdev, 00:02:34.355 mempool: 00:02:34.355 ring, 00:02:34.355 dma: 00:02:34.355 00:02:34.355 net: 00:02:34.355 00:02:34.355 crypto: 00:02:34.355 00:02:34.355 compress: 00:02:34.355 00:02:34.355 vdpa: 00:02:34.355 00:02:34.355 00:02:34.355 Message: 00:02:34.355 ================= 00:02:34.355 Content Skipped 00:02:34.355 ================= 00:02:34.355 00:02:34.355 apps: 00:02:34.355 dumpcap: explicitly disabled via build config 00:02:34.355 graph: explicitly disabled via build config 00:02:34.355 pdump: explicitly disabled via build config 00:02:34.355 proc-info: explicitly disabled via build config 00:02:34.355 test-acl: explicitly disabled via build config 00:02:34.355 test-bbdev: explicitly disabled via build config 00:02:34.355 test-cmdline: explicitly disabled via build config 00:02:34.355 test-compress-perf: explicitly disabled via build config 00:02:34.355 test-crypto-perf: explicitly disabled via build config 00:02:34.355 test-dma-perf: explicitly disabled via build config 00:02:34.355 test-eventdev: explicitly disabled via build config 00:02:34.355 test-fib: explicitly disabled via build config 00:02:34.355 test-flow-perf: explicitly disabled via build config 00:02:34.355 test-gpudev: explicitly disabled via build config 00:02:34.355 
test-mldev: explicitly disabled via build config 00:02:34.355 test-pipeline: explicitly disabled via build config 00:02:34.355 test-pmd: explicitly disabled via build config 00:02:34.355 test-regex: explicitly disabled via build config 00:02:34.355 test-sad: explicitly disabled via build config 00:02:34.355 test-security-perf: explicitly disabled via build config 00:02:34.355 00:02:34.355 libs: 00:02:34.355 argparse: explicitly disabled via build config 00:02:34.355 metrics: explicitly disabled via build config 00:02:34.355 acl: explicitly disabled via build config 00:02:34.355 bbdev: explicitly disabled via build config 00:02:34.355 bitratestats: explicitly disabled via build config 00:02:34.355 bpf: explicitly disabled via build config 00:02:34.355 cfgfile: explicitly disabled via build config 00:02:34.355 distributor: explicitly disabled via build config 00:02:34.355 efd: explicitly disabled via build config 00:02:34.355 eventdev: explicitly disabled via build config 00:02:34.355 dispatcher: explicitly disabled via build config 00:02:34.355 gpudev: explicitly disabled via build config 00:02:34.355 gro: explicitly disabled via build config 00:02:34.355 gso: explicitly disabled via build config 00:02:34.355 ip_frag: explicitly disabled via build config 00:02:34.355 jobstats: explicitly disabled via build config 00:02:34.355 latencystats: explicitly disabled via build config 00:02:34.355 lpm: explicitly disabled via build config 00:02:34.355 member: explicitly disabled via build config 00:02:34.355 pcapng: explicitly disabled via build config 00:02:34.355 rawdev: explicitly disabled via build config 00:02:34.355 regexdev: explicitly disabled via build config 00:02:34.355 mldev: explicitly disabled via build config 00:02:34.355 rib: explicitly disabled via build config 00:02:34.355 sched: explicitly disabled via build config 00:02:34.355 stack: explicitly disabled via build config 00:02:34.355 ipsec: explicitly disabled via build config 00:02:34.355 pdcp: explicitly disabled via build config 00:02:34.355 fib: explicitly disabled via build config 00:02:34.355 port: explicitly disabled via build config 00:02:34.355 pdump: explicitly disabled via build config 00:02:34.355 table: explicitly disabled via build config 00:02:34.355 pipeline: explicitly disabled via build config 00:02:34.355 graph: explicitly disabled via build config 00:02:34.355 node: explicitly disabled via build config 00:02:34.355 00:02:34.355 drivers: 00:02:34.355 common/cpt: not in enabled drivers build config 00:02:34.355 common/dpaax: not in enabled drivers build config 00:02:34.355 common/iavf: not in enabled drivers build config 00:02:34.355 common/idpf: not in enabled drivers build config 00:02:34.355 common/ionic: not in enabled drivers build config 00:02:34.355 common/mvep: not in enabled drivers build config 00:02:34.355 common/octeontx: not in enabled drivers build config 00:02:34.355 bus/auxiliary: not in enabled drivers build config 00:02:34.355 bus/cdx: not in enabled drivers build config 00:02:34.355 bus/dpaa: not in enabled drivers build config 00:02:34.355 bus/fslmc: not in enabled drivers build config 00:02:34.355 bus/ifpga: not in enabled drivers build config 00:02:34.355 bus/platform: not in enabled drivers build config 00:02:34.355 bus/uacce: not in enabled drivers build config 00:02:34.355 bus/vmbus: not in enabled drivers build config 00:02:34.355 common/cnxk: not in enabled drivers build config 00:02:34.355 common/mlx5: not in enabled drivers build config 00:02:34.355 common/nfp: not in enabled drivers 
build config 00:02:34.355 common/nitrox: not in enabled drivers build config 00:02:34.355 common/qat: not in enabled drivers build config 00:02:34.355 common/sfc_efx: not in enabled drivers build config 00:02:34.355 mempool/bucket: not in enabled drivers build config 00:02:34.355 mempool/cnxk: not in enabled drivers build config 00:02:34.355 mempool/dpaa: not in enabled drivers build config 00:02:34.355 mempool/dpaa2: not in enabled drivers build config 00:02:34.355 mempool/octeontx: not in enabled drivers build config 00:02:34.355 mempool/stack: not in enabled drivers build config 00:02:34.355 dma/cnxk: not in enabled drivers build config 00:02:34.355 dma/dpaa: not in enabled drivers build config 00:02:34.355 dma/dpaa2: not in enabled drivers build config 00:02:34.355 dma/hisilicon: not in enabled drivers build config 00:02:34.355 dma/idxd: not in enabled drivers build config 00:02:34.355 dma/ioat: not in enabled drivers build config 00:02:34.355 dma/skeleton: not in enabled drivers build config 00:02:34.355 net/af_packet: not in enabled drivers build config 00:02:34.355 net/af_xdp: not in enabled drivers build config 00:02:34.355 net/ark: not in enabled drivers build config 00:02:34.355 net/atlantic: not in enabled drivers build config 00:02:34.355 net/avp: not in enabled drivers build config 00:02:34.355 net/axgbe: not in enabled drivers build config 00:02:34.355 net/bnx2x: not in enabled drivers build config 00:02:34.355 net/bnxt: not in enabled drivers build config 00:02:34.355 net/bonding: not in enabled drivers build config 00:02:34.355 net/cnxk: not in enabled drivers build config 00:02:34.355 net/cpfl: not in enabled drivers build config 00:02:34.355 net/cxgbe: not in enabled drivers build config 00:02:34.355 net/dpaa: not in enabled drivers build config 00:02:34.355 net/dpaa2: not in enabled drivers build config 00:02:34.355 net/e1000: not in enabled drivers build config 00:02:34.355 net/ena: not in enabled drivers build config 00:02:34.355 net/enetc: not in enabled drivers build config 00:02:34.355 net/enetfec: not in enabled drivers build config 00:02:34.355 net/enic: not in enabled drivers build config 00:02:34.355 net/failsafe: not in enabled drivers build config 00:02:34.355 net/fm10k: not in enabled drivers build config 00:02:34.355 net/gve: not in enabled drivers build config 00:02:34.355 net/hinic: not in enabled drivers build config 00:02:34.355 net/hns3: not in enabled drivers build config 00:02:34.355 net/i40e: not in enabled drivers build config 00:02:34.355 net/iavf: not in enabled drivers build config 00:02:34.355 net/ice: not in enabled drivers build config 00:02:34.355 net/idpf: not in enabled drivers build config 00:02:34.355 net/igc: not in enabled drivers build config 00:02:34.355 net/ionic: not in enabled drivers build config 00:02:34.355 net/ipn3ke: not in enabled drivers build config 00:02:34.355 net/ixgbe: not in enabled drivers build config 00:02:34.355 net/mana: not in enabled drivers build config 00:02:34.355 net/memif: not in enabled drivers build config 00:02:34.355 net/mlx4: not in enabled drivers build config 00:02:34.355 net/mlx5: not in enabled drivers build config 00:02:34.355 net/mvneta: not in enabled drivers build config 00:02:34.355 net/mvpp2: not in enabled drivers build config 00:02:34.355 net/netvsc: not in enabled drivers build config 00:02:34.355 net/nfb: not in enabled drivers build config 00:02:34.355 net/nfp: not in enabled drivers build config 00:02:34.355 net/ngbe: not in enabled drivers build config 00:02:34.356 net/null: not in 
enabled drivers build config 00:02:34.356 net/octeontx: not in enabled drivers build config 00:02:34.356 net/octeon_ep: not in enabled drivers build config 00:02:34.356 net/pcap: not in enabled drivers build config 00:02:34.356 net/pfe: not in enabled drivers build config 00:02:34.356 net/qede: not in enabled drivers build config 00:02:34.356 net/ring: not in enabled drivers build config 00:02:34.356 net/sfc: not in enabled drivers build config 00:02:34.356 net/softnic: not in enabled drivers build config 00:02:34.356 net/tap: not in enabled drivers build config 00:02:34.356 net/thunderx: not in enabled drivers build config 00:02:34.356 net/txgbe: not in enabled drivers build config 00:02:34.356 net/vdev_netvsc: not in enabled drivers build config 00:02:34.356 net/vhost: not in enabled drivers build config 00:02:34.356 net/virtio: not in enabled drivers build config 00:02:34.356 net/vmxnet3: not in enabled drivers build config 00:02:34.356 raw/*: missing internal dependency, "rawdev" 00:02:34.356 crypto/armv8: not in enabled drivers build config 00:02:34.356 crypto/bcmfs: not in enabled drivers build config 00:02:34.356 crypto/caam_jr: not in enabled drivers build config 00:02:34.356 crypto/ccp: not in enabled drivers build config 00:02:34.356 crypto/cnxk: not in enabled drivers build config 00:02:34.356 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.356 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.356 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.356 crypto/mlx5: not in enabled drivers build config 00:02:34.356 crypto/mvsam: not in enabled drivers build config 00:02:34.356 crypto/nitrox: not in enabled drivers build config 00:02:34.356 crypto/null: not in enabled drivers build config 00:02:34.356 crypto/octeontx: not in enabled drivers build config 00:02:34.356 crypto/openssl: not in enabled drivers build config 00:02:34.356 crypto/scheduler: not in enabled drivers build config 00:02:34.356 crypto/uadk: not in enabled drivers build config 00:02:34.356 crypto/virtio: not in enabled drivers build config 00:02:34.356 compress/isal: not in enabled drivers build config 00:02:34.356 compress/mlx5: not in enabled drivers build config 00:02:34.356 compress/nitrox: not in enabled drivers build config 00:02:34.356 compress/octeontx: not in enabled drivers build config 00:02:34.356 compress/zlib: not in enabled drivers build config 00:02:34.356 regex/*: missing internal dependency, "regexdev" 00:02:34.356 ml/*: missing internal dependency, "mldev" 00:02:34.356 vdpa/ifc: not in enabled drivers build config 00:02:34.356 vdpa/mlx5: not in enabled drivers build config 00:02:34.356 vdpa/nfp: not in enabled drivers build config 00:02:34.356 vdpa/sfc: not in enabled drivers build config 00:02:34.356 event/*: missing internal dependency, "eventdev" 00:02:34.356 baseband/*: missing internal dependency, "bbdev" 00:02:34.356 gpu/*: missing internal dependency, "gpudev" 00:02:34.356 00:02:34.356 00:02:34.356 Build targets in project: 84 00:02:34.356 00:02:34.356 DPDK 24.03.0 00:02:34.356 00:02:34.356 User defined options 00:02:34.356 buildtype : debug 00:02:34.356 default_library : shared 00:02:34.356 libdir : lib 00:02:34.356 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:34.356 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.356 c_link_args : 00:02:34.356 cpu_instruction_set: native 00:02:34.356 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:34.356 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:34.356 enable_docs : false 00:02:34.356 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.356 enable_kmods : false 00:02:34.356 max_lcores : 128 00:02:34.356 tests : false 00:02:34.356 00:02:34.356 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:34.356 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:34.356 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:34.356 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:34.356 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:34.356 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:34.356 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:34.356 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:34.356 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:34.356 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:34.356 [9/267] Linking static target lib/librte_kvargs.a 00:02:34.356 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:34.356 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:34.356 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:34.356 [13/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:34.615 [14/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:34.615 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:34.615 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:34.615 [17/267] Linking static target lib/librte_log.a 00:02:34.615 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:34.615 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:34.615 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.615 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.615 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.615 [23/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:34.615 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.615 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.615 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.615 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.615 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.615 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.615 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.615 [31/267] Linking static target 
lib/librte_pci.a 00:02:34.615 [32/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.615 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.615 [34/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.615 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.615 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:34.615 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:34.615 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:34.887 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:34.887 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:34.887 [41/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.887 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:34.887 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.887 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:34.887 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.887 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:34.887 [47/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.887 [48/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:34.887 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:34.887 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:34.887 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:34.887 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:34.887 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:34.887 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:34.887 [55/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:34.887 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:34.887 [57/267] Linking static target lib/librte_telemetry.a 00:02:34.887 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:34.887 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:34.887 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:34.887 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:34.887 [62/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:34.887 [63/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.887 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:34.887 [65/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.887 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.887 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:34.887 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.887 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.887 [70/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:34.887 [71/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:34.887 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.887 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:34.887 [74/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.887 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:34.887 [76/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:34.887 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:34.887 [78/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:34.887 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:34.887 [80/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.887 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:34.887 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:34.887 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.887 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:34.887 [85/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.887 [86/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:34.887 [87/267] Linking static target lib/librte_meter.a 00:02:34.887 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:34.887 [89/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:34.887 [90/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:34.887 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:34.887 [92/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:34.887 [93/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:34.887 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.887 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:34.887 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.887 [97/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.887 [98/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:35.153 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:35.153 [100/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.153 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:35.153 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.153 [103/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.153 [104/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:35.153 [105/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:35.153 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:35.153 [107/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:35.153 [108/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.153 [109/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.153 [110/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.153 [111/267] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:35.153 [112/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:35.153 [113/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.153 [114/267] Linking static target lib/librte_timer.a 00:02:35.153 [115/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:35.153 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.153 [117/267] Linking static target lib/librte_ring.a 00:02:35.153 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.153 [119/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:35.153 [120/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.153 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:35.153 [122/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.153 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:35.153 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:35.153 [125/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.153 [126/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:35.153 [127/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.153 [128/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.153 [129/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.153 [130/267] Linking static target lib/librte_mempool.a 00:02:35.153 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.153 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.153 [133/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.153 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:35.153 [135/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.153 [136/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:35.153 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.153 [138/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.153 [139/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.153 [140/267] Linking static target lib/librte_cmdline.a 00:02:35.153 [141/267] Linking static target lib/librte_power.a 00:02:35.153 [142/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:35.153 [143/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.153 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:35.153 [145/267] Linking static target lib/librte_net.a 00:02:35.153 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:35.153 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.153 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.153 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.153 [150/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.153 [151/267] Linking target lib/librte_log.so.24.1 00:02:35.153 [152/267] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:35.153 [153/267] Linking static target lib/librte_rcu.a 00:02:35.153 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:35.153 [155/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.153 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:35.154 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:35.154 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.154 [159/267] Linking static target lib/librte_dmadev.a 00:02:35.154 [160/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:35.154 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:35.154 [162/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:35.154 [163/267] Linking static target lib/librte_reorder.a 00:02:35.154 [164/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:35.154 [165/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:35.154 [166/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.154 [167/267] Linking static target lib/librte_compressdev.a 00:02:35.154 [168/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.154 [169/267] Linking static target lib/librte_eal.a 00:02:35.154 [170/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:35.154 [171/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:35.154 [172/267] Linking static target drivers/librte_bus_vdev.a 00:02:35.154 [173/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.154 [174/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:35.154 [175/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:35.154 [176/267] Linking static target lib/librte_security.a 00:02:35.154 [177/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:35.154 [178/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:35.154 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:35.154 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.154 [181/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.154 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:35.154 [183/267] Linking static target lib/librte_mbuf.a 00:02:35.154 [184/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:35.416 [185/267] Linking target lib/librte_kvargs.so.24.1 00:02:35.416 [186/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:35.416 [187/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:35.416 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.416 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:35.416 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:35.416 [191/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.416 [192/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:35.416 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.416 
[194/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:35.416 [195/267] Linking static target lib/librte_cryptodev.a 00:02:35.416 [196/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.416 [197/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:35.416 [198/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.416 [199/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:35.416 [200/267] Linking static target lib/librte_hash.a 00:02:35.416 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.416 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:35.416 [203/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.416 [204/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:35.416 [205/267] Linking static target drivers/librte_bus_pci.a 00:02:35.416 [206/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.416 [207/267] Linking static target drivers/librte_mempool_ring.a 00:02:35.416 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.678 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.678 [210/267] Linking target lib/librte_telemetry.so.24.1 00:02:35.678 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.678 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.678 [213/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.678 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.939 [215/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:35.939 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.939 [217/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.939 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.939 [219/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.939 [220/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.201 [221/267] Linking static target lib/librte_ethdev.a 00:02:36.201 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.201 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.201 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.461 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.461 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.034 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:37.034 [228/267] Linking static target lib/librte_vhost.a 00:02:37.607 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:39.526 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.118 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.061 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.061 [233/267] Linking target lib/librte_eal.so.24.1 00:02:47.322 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:47.322 [235/267] Linking target lib/librte_pci.so.24.1 00:02:47.322 [236/267] Linking target lib/librte_ring.so.24.1 00:02:47.322 [237/267] Linking target lib/librte_timer.so.24.1 00:02:47.322 [238/267] Linking target lib/librte_meter.so.24.1 00:02:47.322 [239/267] Linking target lib/librte_dmadev.so.24.1 00:02:47.322 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:47.322 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:47.322 [242/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:47.322 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:47.322 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:47.322 [245/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:47.322 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:47.584 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:47.584 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:47.584 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:47.584 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:47.584 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:47.584 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:47.584 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:47.844 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:47.844 [255/267] Linking target lib/librte_net.so.24.1 00:02:47.844 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:47.844 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:47.844 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:47.844 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:47.844 [260/267] Linking target lib/librte_cmdline.so.24.1 00:02:47.844 [261/267] Linking target lib/librte_hash.so.24.1 00:02:47.844 [262/267] Linking target lib/librte_security.so.24.1 00:02:47.844 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:48.106 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:48.106 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:48.106 [266/267] Linking target lib/librte_power.so.24.1 00:02:48.106 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:48.106 INFO: autodetecting backend as ninja 00:02:48.106 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:49.496 CC lib/ut/ut.o 00:02:49.496 CC lib/log/log.o 00:02:49.496 CC lib/log/log_deprecated.o 00:02:49.496 CC lib/log/log_flags.o 00:02:49.496 CC lib/ut_mock/mock.o 00:02:49.496 LIB libspdk_ut.a 00:02:49.496 LIB libspdk_log.a 00:02:49.496 LIB libspdk_ut_mock.a 00:02:49.496 SO 
libspdk_ut.so.2.0 00:02:49.496 SO libspdk_log.so.7.0 00:02:49.496 SO libspdk_ut_mock.so.6.0 00:02:49.496 SYMLINK libspdk_ut.so 00:02:49.496 SYMLINK libspdk_log.so 00:02:49.496 SYMLINK libspdk_ut_mock.so 00:02:50.068 CC lib/ioat/ioat.o 00:02:50.068 CC lib/dma/dma.o 00:02:50.068 CXX lib/trace_parser/trace.o 00:02:50.068 CC lib/util/base64.o 00:02:50.068 CC lib/util/bit_array.o 00:02:50.068 CC lib/util/cpuset.o 00:02:50.068 CC lib/util/crc16.o 00:02:50.068 CC lib/util/crc32.o 00:02:50.068 CC lib/util/crc32c.o 00:02:50.068 CC lib/util/dif.o 00:02:50.068 CC lib/util/crc32_ieee.o 00:02:50.068 CC lib/util/crc64.o 00:02:50.068 CC lib/util/fd.o 00:02:50.068 CC lib/util/fd_group.o 00:02:50.068 CC lib/util/file.o 00:02:50.068 CC lib/util/hexlify.o 00:02:50.068 CC lib/util/iov.o 00:02:50.068 CC lib/util/math.o 00:02:50.068 CC lib/util/pipe.o 00:02:50.068 CC lib/util/net.o 00:02:50.068 CC lib/util/strerror_tls.o 00:02:50.068 CC lib/util/string.o 00:02:50.068 CC lib/util/uuid.o 00:02:50.068 CC lib/util/xor.o 00:02:50.068 CC lib/util/zipf.o 00:02:50.068 CC lib/vfio_user/host/vfio_user.o 00:02:50.068 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.068 LIB libspdk_dma.a 00:02:50.068 SO libspdk_dma.so.4.0 00:02:50.399 LIB libspdk_ioat.a 00:02:50.399 SO libspdk_ioat.so.7.0 00:02:50.399 SYMLINK libspdk_dma.so 00:02:50.399 SYMLINK libspdk_ioat.so 00:02:50.399 LIB libspdk_vfio_user.a 00:02:50.399 SO libspdk_vfio_user.so.5.0 00:02:50.399 LIB libspdk_util.a 00:02:50.399 SYMLINK libspdk_vfio_user.so 00:02:50.399 SO libspdk_util.so.10.0 00:02:50.662 SYMLINK libspdk_util.so 00:02:50.662 LIB libspdk_trace_parser.a 00:02:50.924 SO libspdk_trace_parser.so.5.0 00:02:50.924 SYMLINK libspdk_trace_parser.so 00:02:50.924 CC lib/vmd/vmd.o 00:02:50.924 CC lib/vmd/led.o 00:02:50.924 CC lib/idxd/idxd.o 00:02:50.924 CC lib/idxd/idxd_user.o 00:02:50.924 CC lib/idxd/idxd_kernel.o 00:02:50.924 CC lib/rdma_utils/rdma_utils.o 00:02:50.924 CC lib/rdma_provider/common.o 00:02:50.924 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:50.924 CC lib/conf/conf.o 00:02:50.924 CC lib/json/json_parse.o 00:02:50.924 CC lib/json/json_util.o 00:02:50.924 CC lib/json/json_write.o 00:02:50.924 CC lib/env_dpdk/env.o 00:02:50.924 CC lib/env_dpdk/memory.o 00:02:50.924 CC lib/env_dpdk/pci.o 00:02:50.924 CC lib/env_dpdk/init.o 00:02:50.924 CC lib/env_dpdk/threads.o 00:02:50.924 CC lib/env_dpdk/pci_ioat.o 00:02:50.924 CC lib/env_dpdk/pci_virtio.o 00:02:50.924 CC lib/env_dpdk/pci_vmd.o 00:02:50.924 CC lib/env_dpdk/pci_idxd.o 00:02:50.924 CC lib/env_dpdk/pci_event.o 00:02:50.924 CC lib/env_dpdk/sigbus_handler.o 00:02:50.924 CC lib/env_dpdk/pci_dpdk.o 00:02:50.924 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:50.924 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:51.186 LIB libspdk_rdma_provider.a 00:02:51.186 SO libspdk_rdma_provider.so.6.0 00:02:51.186 LIB libspdk_conf.a 00:02:51.448 LIB libspdk_rdma_utils.a 00:02:51.448 SO libspdk_conf.so.6.0 00:02:51.448 SYMLINK libspdk_rdma_provider.so 00:02:51.448 LIB libspdk_json.a 00:02:51.448 SO libspdk_rdma_utils.so.1.0 00:02:51.448 SYMLINK libspdk_conf.so 00:02:51.448 SO libspdk_json.so.6.0 00:02:51.448 SYMLINK libspdk_rdma_utils.so 00:02:51.448 SYMLINK libspdk_json.so 00:02:51.448 LIB libspdk_idxd.a 00:02:51.710 SO libspdk_idxd.so.12.0 00:02:51.710 LIB libspdk_vmd.a 00:02:51.710 SO libspdk_vmd.so.6.0 00:02:51.710 SYMLINK libspdk_idxd.so 00:02:51.710 SYMLINK libspdk_vmd.so 00:02:51.971 CC lib/jsonrpc/jsonrpc_server.o 00:02:51.971 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:51.971 CC lib/jsonrpc/jsonrpc_client.o 00:02:51.971 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:51.971 LIB libspdk_jsonrpc.a 00:02:52.232 SO libspdk_jsonrpc.so.6.0 00:02:52.232 SYMLINK libspdk_jsonrpc.so 00:02:52.232 LIB libspdk_env_dpdk.a 00:02:52.232 SO libspdk_env_dpdk.so.15.0 00:02:52.492 SYMLINK libspdk_env_dpdk.so 00:02:52.492 CC lib/rpc/rpc.o 00:02:52.753 LIB libspdk_rpc.a 00:02:52.753 SO libspdk_rpc.so.6.0 00:02:53.014 SYMLINK libspdk_rpc.so 00:02:53.275 CC lib/keyring/keyring_rpc.o 00:02:53.275 CC lib/keyring/keyring.o 00:02:53.275 CC lib/trace/trace.o 00:02:53.275 CC lib/notify/notify_rpc.o 00:02:53.275 CC lib/notify/notify.o 00:02:53.275 CC lib/trace/trace_flags.o 00:02:53.275 CC lib/trace/trace_rpc.o 00:02:53.536 LIB libspdk_notify.a 00:02:53.536 LIB libspdk_keyring.a 00:02:53.536 SO libspdk_notify.so.6.0 00:02:53.536 SO libspdk_keyring.so.1.0 00:02:53.536 LIB libspdk_trace.a 00:02:53.536 SYMLINK libspdk_notify.so 00:02:53.536 SYMLINK libspdk_keyring.so 00:02:53.536 SO libspdk_trace.so.10.0 00:02:53.536 SYMLINK libspdk_trace.so 00:02:54.109 CC lib/sock/sock_rpc.o 00:02:54.109 CC lib/sock/sock.o 00:02:54.109 CC lib/thread/thread.o 00:02:54.109 CC lib/thread/iobuf.o 00:02:54.377 LIB libspdk_sock.a 00:02:54.377 SO libspdk_sock.so.10.0 00:02:54.377 SYMLINK libspdk_sock.so 00:02:54.954 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.954 CC lib/nvme/nvme_ctrlr.o 00:02:54.954 CC lib/nvme/nvme_fabric.o 00:02:54.954 CC lib/nvme/nvme_ns_cmd.o 00:02:54.954 CC lib/nvme/nvme_ns.o 00:02:54.954 CC lib/nvme/nvme_pcie_common.o 00:02:54.954 CC lib/nvme/nvme_pcie.o 00:02:54.954 CC lib/nvme/nvme_qpair.o 00:02:54.954 CC lib/nvme/nvme.o 00:02:54.954 CC lib/nvme/nvme_quirks.o 00:02:54.954 CC lib/nvme/nvme_transport.o 00:02:54.954 CC lib/nvme/nvme_discovery.o 00:02:54.954 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.954 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.954 CC lib/nvme/nvme_tcp.o 00:02:54.954 CC lib/nvme/nvme_opal.o 00:02:54.954 CC lib/nvme/nvme_io_msg.o 00:02:54.954 CC lib/nvme/nvme_poll_group.o 00:02:54.954 CC lib/nvme/nvme_zns.o 00:02:54.954 CC lib/nvme/nvme_stubs.o 00:02:54.954 CC lib/nvme/nvme_auth.o 00:02:54.954 CC lib/nvme/nvme_cuse.o 00:02:54.954 CC lib/nvme/nvme_rdma.o 00:02:54.954 CC lib/nvme/nvme_vfio_user.o 00:02:55.215 LIB libspdk_thread.a 00:02:55.215 SO libspdk_thread.so.10.1 00:02:55.215 SYMLINK libspdk_thread.so 00:02:55.787 CC lib/blob/request.o 00:02:55.787 CC lib/blob/blobstore.o 00:02:55.787 CC lib/blob/zeroes.o 00:02:55.787 CC lib/init/json_config.o 00:02:55.787 CC lib/blob/blob_bs_dev.o 00:02:55.787 CC lib/init/subsystem.o 00:02:55.787 CC lib/init/subsystem_rpc.o 00:02:55.787 CC lib/init/rpc.o 00:02:55.787 CC lib/virtio/virtio.o 00:02:55.787 CC lib/virtio/virtio_vhost_user.o 00:02:55.787 CC lib/virtio/virtio_vfio_user.o 00:02:55.787 CC lib/virtio/virtio_pci.o 00:02:55.787 CC lib/accel/accel.o 00:02:55.787 CC lib/accel/accel_rpc.o 00:02:55.787 CC lib/accel/accel_sw.o 00:02:55.787 CC lib/vfu_tgt/tgt_endpoint.o 00:02:55.787 CC lib/vfu_tgt/tgt_rpc.o 00:02:55.787 LIB libspdk_init.a 00:02:56.047 SO libspdk_init.so.5.0 00:02:56.047 LIB libspdk_virtio.a 00:02:56.047 LIB libspdk_vfu_tgt.a 00:02:56.047 SO libspdk_vfu_tgt.so.3.0 00:02:56.047 SO libspdk_virtio.so.7.0 00:02:56.047 SYMLINK libspdk_init.so 00:02:56.047 SYMLINK libspdk_vfu_tgt.so 00:02:56.047 SYMLINK libspdk_virtio.so 00:02:56.308 CC lib/event/app.o 00:02:56.308 CC lib/event/reactor.o 00:02:56.308 CC lib/event/log_rpc.o 00:02:56.308 CC lib/event/app_rpc.o 00:02:56.308 CC lib/event/scheduler_static.o 00:02:56.570 LIB libspdk_accel.a 00:02:56.570 SO libspdk_accel.so.16.0 00:02:56.570 LIB 
libspdk_nvme.a 00:02:56.570 SYMLINK libspdk_accel.so 00:02:56.832 SO libspdk_nvme.so.13.1 00:02:56.832 LIB libspdk_event.a 00:02:56.832 SO libspdk_event.so.14.0 00:02:56.832 SYMLINK libspdk_event.so 00:02:57.094 CC lib/bdev/bdev.o 00:02:57.094 CC lib/bdev/bdev_rpc.o 00:02:57.094 CC lib/bdev/bdev_zone.o 00:02:57.094 CC lib/bdev/part.o 00:02:57.094 CC lib/bdev/scsi_nvme.o 00:02:57.094 SYMLINK libspdk_nvme.so 00:02:58.039 LIB libspdk_blob.a 00:02:58.300 SO libspdk_blob.so.11.0 00:02:58.300 SYMLINK libspdk_blob.so 00:02:58.562 CC lib/blobfs/blobfs.o 00:02:58.562 CC lib/blobfs/tree.o 00:02:58.562 CC lib/lvol/lvol.o 00:02:59.136 LIB libspdk_bdev.a 00:02:59.136 SO libspdk_bdev.so.16.0 00:02:59.136 SYMLINK libspdk_bdev.so 00:02:59.397 LIB libspdk_blobfs.a 00:02:59.397 SO libspdk_blobfs.so.10.0 00:02:59.397 LIB libspdk_lvol.a 00:02:59.397 SO libspdk_lvol.so.10.0 00:02:59.397 SYMLINK libspdk_blobfs.so 00:02:59.659 SYMLINK libspdk_lvol.so 00:02:59.659 CC lib/ftl/ftl_core.o 00:02:59.659 CC lib/ftl/ftl_init.o 00:02:59.659 CC lib/ftl/ftl_layout.o 00:02:59.659 CC lib/ftl/ftl_debug.o 00:02:59.659 CC lib/ftl/ftl_io.o 00:02:59.659 CC lib/ftl/ftl_sb.o 00:02:59.659 CC lib/ftl/ftl_l2p.o 00:02:59.659 CC lib/ftl/ftl_l2p_flat.o 00:02:59.659 CC lib/ftl/ftl_nv_cache.o 00:02:59.659 CC lib/nbd/nbd.o 00:02:59.659 CC lib/ftl/ftl_band.o 00:02:59.659 CC lib/nbd/nbd_rpc.o 00:02:59.659 CC lib/ftl/ftl_band_ops.o 00:02:59.659 CC lib/ftl/ftl_writer.o 00:02:59.659 CC lib/ftl/ftl_rq.o 00:02:59.659 CC lib/ftl/ftl_reloc.o 00:02:59.659 CC lib/ftl/ftl_l2p_cache.o 00:02:59.659 CC lib/ftl/ftl_p2l.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:59.659 CC lib/scsi/dev.o 00:02:59.659 CC lib/scsi/lun.o 00:02:59.659 CC lib/ublk/ublk.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:59.659 CC lib/scsi/port.o 00:02:59.659 CC lib/scsi/scsi.o 00:02:59.659 CC lib/nvmf/ctrlr_discovery.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:59.659 CC lib/ublk/ublk_rpc.o 00:02:59.659 CC lib/nvmf/ctrlr.o 00:02:59.659 CC lib/scsi/scsi_bdev.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:59.659 CC lib/nvmf/ctrlr_bdev.o 00:02:59.659 CC lib/scsi/scsi_pr.o 00:02:59.659 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:59.659 CC lib/nvmf/nvmf_rpc.o 00:02:59.659 CC lib/scsi/scsi_rpc.o 00:02:59.659 CC lib/scsi/task.o 00:02:59.659 CC lib/ftl/utils/ftl_conf.o 00:02:59.659 CC lib/nvmf/subsystem.o 00:02:59.659 CC lib/ftl/utils/ftl_md.o 00:02:59.659 CC lib/nvmf/nvmf.o 00:02:59.659 CC lib/ftl/utils/ftl_mempool.o 00:02:59.659 CC lib/ftl/utils/ftl_bitmap.o 00:02:59.659 CC lib/nvmf/transport.o 00:02:59.659 CC lib/ftl/utils/ftl_property.o 00:02:59.659 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:59.659 CC lib/nvmf/tcp.o 00:02:59.659 CC lib/nvmf/stubs.o 00:02:59.659 CC lib/nvmf/mdns_server.o 00:02:59.659 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:59.659 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:59.659 CC lib/nvmf/vfio_user.o 00:02:59.659 CC lib/nvmf/rdma.o 00:02:59.659 CC lib/nvmf/auth.o 00:02:59.659 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:59.659 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:59.659 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:59.659 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 
00:02:59.659 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:59.659 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:59.659 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:59.659 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:59.659 CC lib/ftl/base/ftl_base_dev.o 00:02:59.659 CC lib/ftl/base/ftl_base_bdev.o 00:02:59.659 CC lib/ftl/ftl_trace.o 00:03:00.229 LIB libspdk_nbd.a 00:03:00.229 SO libspdk_nbd.so.7.0 00:03:00.229 SYMLINK libspdk_nbd.so 00:03:00.229 LIB libspdk_scsi.a 00:03:00.229 LIB libspdk_ublk.a 00:03:00.229 SO libspdk_scsi.so.9.0 00:03:00.229 SO libspdk_ublk.so.3.0 00:03:00.488 SYMLINK libspdk_ublk.so 00:03:00.488 SYMLINK libspdk_scsi.so 00:03:00.488 LIB libspdk_ftl.a 00:03:00.750 SO libspdk_ftl.so.9.0 00:03:00.750 CC lib/vhost/vhost.o 00:03:00.750 CC lib/vhost/vhost_rpc.o 00:03:00.750 CC lib/vhost/vhost_scsi.o 00:03:00.750 CC lib/vhost/vhost_blk.o 00:03:00.750 CC lib/vhost/rte_vhost_user.o 00:03:00.750 CC lib/iscsi/conn.o 00:03:00.750 CC lib/iscsi/init_grp.o 00:03:00.750 CC lib/iscsi/iscsi.o 00:03:00.750 CC lib/iscsi/md5.o 00:03:00.750 CC lib/iscsi/param.o 00:03:00.750 CC lib/iscsi/portal_grp.o 00:03:00.750 CC lib/iscsi/tgt_node.o 00:03:00.750 CC lib/iscsi/iscsi_subsystem.o 00:03:00.750 CC lib/iscsi/iscsi_rpc.o 00:03:00.750 CC lib/iscsi/task.o 00:03:01.011 SYMLINK libspdk_ftl.so 00:03:01.585 LIB libspdk_nvmf.a 00:03:01.585 SO libspdk_nvmf.so.19.0 00:03:01.585 LIB libspdk_vhost.a 00:03:01.585 SYMLINK libspdk_nvmf.so 00:03:01.846 SO libspdk_vhost.so.8.0 00:03:01.846 SYMLINK libspdk_vhost.so 00:03:01.846 LIB libspdk_iscsi.a 00:03:02.108 SO libspdk_iscsi.so.8.0 00:03:02.108 SYMLINK libspdk_iscsi.so 00:03:02.680 CC module/vfu_device/vfu_virtio.o 00:03:02.681 CC module/vfu_device/vfu_virtio_blk.o 00:03:02.681 CC module/vfu_device/vfu_virtio_scsi.o 00:03:02.681 CC module/vfu_device/vfu_virtio_rpc.o 00:03:02.681 CC module/env_dpdk/env_dpdk_rpc.o 00:03:02.942 CC module/sock/posix/posix.o 00:03:02.942 LIB libspdk_env_dpdk_rpc.a 00:03:02.942 CC module/accel/dsa/accel_dsa_rpc.o 00:03:02.942 CC module/accel/dsa/accel_dsa.o 00:03:02.942 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:02.942 CC module/keyring/linux/keyring.o 00:03:02.942 CC module/keyring/linux/keyring_rpc.o 00:03:02.942 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:02.942 CC module/accel/error/accel_error.o 00:03:02.942 CC module/keyring/file/keyring.o 00:03:02.942 CC module/accel/error/accel_error_rpc.o 00:03:02.942 CC module/keyring/file/keyring_rpc.o 00:03:02.942 CC module/scheduler/gscheduler/gscheduler.o 00:03:02.942 CC module/accel/iaa/accel_iaa.o 00:03:02.942 CC module/accel/ioat/accel_ioat.o 00:03:02.942 CC module/blob/bdev/blob_bdev.o 00:03:02.942 CC module/accel/iaa/accel_iaa_rpc.o 00:03:02.942 CC module/accel/ioat/accel_ioat_rpc.o 00:03:02.942 SO libspdk_env_dpdk_rpc.so.6.0 00:03:02.942 SYMLINK libspdk_env_dpdk_rpc.so 00:03:03.203 LIB libspdk_keyring_linux.a 00:03:03.203 LIB libspdk_keyring_file.a 00:03:03.203 LIB libspdk_scheduler_gscheduler.a 00:03:03.203 LIB libspdk_scheduler_dpdk_governor.a 00:03:03.203 LIB libspdk_scheduler_dynamic.a 00:03:03.203 SO libspdk_keyring_file.so.1.0 00:03:03.203 LIB libspdk_accel_error.a 00:03:03.203 SO libspdk_keyring_linux.so.1.0 00:03:03.203 SO libspdk_scheduler_gscheduler.so.4.0 00:03:03.203 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:03.203 LIB libspdk_accel_ioat.a 00:03:03.203 LIB libspdk_accel_iaa.a 00:03:03.203 SO libspdk_scheduler_dynamic.so.4.0 00:03:03.203 LIB libspdk_accel_dsa.a 00:03:03.203 SO libspdk_accel_error.so.2.0 00:03:03.203 LIB libspdk_blob_bdev.a 00:03:03.203 SO 
libspdk_accel_ioat.so.6.0 00:03:03.203 SO libspdk_accel_iaa.so.3.0 00:03:03.203 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:03.203 SYMLINK libspdk_keyring_file.so 00:03:03.203 SYMLINK libspdk_scheduler_gscheduler.so 00:03:03.203 SO libspdk_accel_dsa.so.5.0 00:03:03.203 SYMLINK libspdk_keyring_linux.so 00:03:03.203 SYMLINK libspdk_scheduler_dynamic.so 00:03:03.203 SO libspdk_blob_bdev.so.11.0 00:03:03.203 SYMLINK libspdk_accel_error.so 00:03:03.203 LIB libspdk_vfu_device.a 00:03:03.203 SYMLINK libspdk_accel_ioat.so 00:03:03.203 SYMLINK libspdk_accel_iaa.so 00:03:03.203 SYMLINK libspdk_accel_dsa.so 00:03:03.203 SYMLINK libspdk_blob_bdev.so 00:03:03.203 SO libspdk_vfu_device.so.3.0 00:03:03.466 SYMLINK libspdk_vfu_device.so 00:03:03.466 LIB libspdk_sock_posix.a 00:03:03.729 SO libspdk_sock_posix.so.6.0 00:03:03.729 SYMLINK libspdk_sock_posix.so 00:03:03.729 CC module/bdev/error/vbdev_error.o 00:03:03.729 CC module/bdev/error/vbdev_error_rpc.o 00:03:03.729 CC module/bdev/gpt/gpt.o 00:03:04.006 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.006 CC module/bdev/passthru/vbdev_passthru.o 00:03:04.006 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:04.006 CC module/bdev/delay/vbdev_delay.o 00:03:04.006 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.006 CC module/bdev/malloc/bdev_malloc.o 00:03:04.006 CC module/bdev/lvol/vbdev_lvol.o 00:03:04.006 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:04.006 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:04.006 CC module/bdev/iscsi/bdev_iscsi.o 00:03:04.006 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:04.006 CC module/bdev/nvme/bdev_nvme.o 00:03:04.006 CC module/bdev/raid/bdev_raid.o 00:03:04.006 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:04.006 CC module/bdev/aio/bdev_aio.o 00:03:04.006 CC module/bdev/raid/bdev_raid_rpc.o 00:03:04.006 CC module/bdev/raid/bdev_raid_sb.o 00:03:04.006 CC module/bdev/aio/bdev_aio_rpc.o 00:03:04.006 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:04.006 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:04.006 CC module/bdev/raid/raid0.o 00:03:04.006 CC module/bdev/nvme/nvme_rpc.o 00:03:04.006 CC module/bdev/raid/raid1.o 00:03:04.006 CC module/bdev/null/bdev_null.o 00:03:04.006 CC module/bdev/ftl/bdev_ftl.o 00:03:04.006 CC module/bdev/nvme/bdev_mdns_client.o 00:03:04.006 CC module/bdev/raid/concat.o 00:03:04.006 CC module/bdev/null/bdev_null_rpc.o 00:03:04.006 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:04.006 CC module/bdev/nvme/vbdev_opal.o 00:03:04.006 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:04.006 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:04.006 CC module/bdev/split/vbdev_split_rpc.o 00:03:04.006 CC module/bdev/split/vbdev_split.o 00:03:04.006 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.006 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.006 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:04.006 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:04.006 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:04.273 LIB libspdk_blobfs_bdev.a 00:03:04.273 SO libspdk_blobfs_bdev.so.6.0 00:03:04.273 LIB libspdk_bdev_gpt.a 00:03:04.273 LIB libspdk_bdev_split.a 00:03:04.273 LIB libspdk_bdev_error.a 00:03:04.273 LIB libspdk_bdev_passthru.a 00:03:04.273 SO libspdk_bdev_split.so.6.0 00:03:04.273 LIB libspdk_bdev_null.a 00:03:04.273 SO libspdk_bdev_gpt.so.6.0 00:03:04.273 LIB libspdk_bdev_ftl.a 00:03:04.273 SO libspdk_bdev_error.so.6.0 00:03:04.273 SYMLINK libspdk_blobfs_bdev.so 00:03:04.273 SO libspdk_bdev_passthru.so.6.0 00:03:04.273 SO libspdk_bdev_null.so.6.0 00:03:04.273 LIB libspdk_bdev_malloc.a 00:03:04.273 LIB libspdk_bdev_delay.a 
00:03:04.273 SO libspdk_bdev_ftl.so.6.0 00:03:04.273 SYMLINK libspdk_bdev_gpt.so 00:03:04.273 LIB libspdk_bdev_iscsi.a 00:03:04.273 LIB libspdk_bdev_aio.a 00:03:04.273 SYMLINK libspdk_bdev_split.so 00:03:04.273 LIB libspdk_bdev_zone_block.a 00:03:04.273 SYMLINK libspdk_bdev_passthru.so 00:03:04.273 SO libspdk_bdev_delay.so.6.0 00:03:04.273 SYMLINK libspdk_bdev_error.so 00:03:04.273 SO libspdk_bdev_malloc.so.6.0 00:03:04.273 SO libspdk_bdev_aio.so.6.0 00:03:04.273 SYMLINK libspdk_bdev_null.so 00:03:04.273 SO libspdk_bdev_iscsi.so.6.0 00:03:04.273 SO libspdk_bdev_zone_block.so.6.0 00:03:04.273 SYMLINK libspdk_bdev_ftl.so 00:03:04.273 SYMLINK libspdk_bdev_malloc.so 00:03:04.534 SYMLINK libspdk_bdev_delay.so 00:03:04.534 LIB libspdk_bdev_lvol.a 00:03:04.534 SYMLINK libspdk_bdev_aio.so 00:03:04.535 SYMLINK libspdk_bdev_iscsi.so 00:03:04.535 SYMLINK libspdk_bdev_zone_block.so 00:03:04.535 LIB libspdk_bdev_virtio.a 00:03:04.535 SO libspdk_bdev_lvol.so.6.0 00:03:04.535 SO libspdk_bdev_virtio.so.6.0 00:03:04.535 SYMLINK libspdk_bdev_lvol.so 00:03:04.535 SYMLINK libspdk_bdev_virtio.so 00:03:04.865 LIB libspdk_bdev_raid.a 00:03:04.865 SO libspdk_bdev_raid.so.6.0 00:03:05.126 SYMLINK libspdk_bdev_raid.so 00:03:06.071 LIB libspdk_bdev_nvme.a 00:03:06.071 SO libspdk_bdev_nvme.so.7.0 00:03:06.071 SYMLINK libspdk_bdev_nvme.so 00:03:06.644 CC module/event/subsystems/iobuf/iobuf.o 00:03:06.644 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:06.644 CC module/event/subsystems/vmd/vmd.o 00:03:06.644 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:06.644 CC module/event/subsystems/keyring/keyring.o 00:03:06.644 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:06.644 CC module/event/subsystems/scheduler/scheduler.o 00:03:06.906 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:06.906 CC module/event/subsystems/sock/sock.o 00:03:06.906 LIB libspdk_event_iobuf.a 00:03:06.906 LIB libspdk_event_vfu_tgt.a 00:03:06.906 LIB libspdk_event_keyring.a 00:03:06.906 LIB libspdk_event_vhost_blk.a 00:03:06.906 LIB libspdk_event_vmd.a 00:03:06.906 LIB libspdk_event_scheduler.a 00:03:06.906 LIB libspdk_event_sock.a 00:03:06.906 SO libspdk_event_vfu_tgt.so.3.0 00:03:06.906 SO libspdk_event_iobuf.so.3.0 00:03:06.906 SO libspdk_event_keyring.so.1.0 00:03:06.906 SO libspdk_event_scheduler.so.4.0 00:03:06.906 SO libspdk_event_vhost_blk.so.3.0 00:03:06.906 SO libspdk_event_vmd.so.6.0 00:03:06.906 SO libspdk_event_sock.so.5.0 00:03:06.906 SYMLINK libspdk_event_scheduler.so 00:03:06.906 SYMLINK libspdk_event_vmd.so 00:03:07.166 SYMLINK libspdk_event_vfu_tgt.so 00:03:07.166 SYMLINK libspdk_event_keyring.so 00:03:07.166 SYMLINK libspdk_event_iobuf.so 00:03:07.166 SYMLINK libspdk_event_vhost_blk.so 00:03:07.166 SYMLINK libspdk_event_sock.so 00:03:07.427 CC module/event/subsystems/accel/accel.o 00:03:07.427 LIB libspdk_event_accel.a 00:03:07.689 SO libspdk_event_accel.so.6.0 00:03:07.689 SYMLINK libspdk_event_accel.so 00:03:07.950 CC module/event/subsystems/bdev/bdev.o 00:03:08.211 LIB libspdk_event_bdev.a 00:03:08.211 SO libspdk_event_bdev.so.6.0 00:03:08.211 SYMLINK libspdk_event_bdev.so 00:03:08.785 CC module/event/subsystems/scsi/scsi.o 00:03:08.785 CC module/event/subsystems/ublk/ublk.o 00:03:08.785 CC module/event/subsystems/nbd/nbd.o 00:03:08.785 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:08.785 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:08.785 LIB libspdk_event_nbd.a 00:03:08.785 LIB libspdk_event_ublk.a 00:03:08.785 LIB libspdk_event_scsi.a 00:03:08.785 SO libspdk_event_nbd.so.6.0 00:03:08.785 SO 
libspdk_event_ublk.so.3.0 00:03:08.785 SO libspdk_event_scsi.so.6.0 00:03:08.785 LIB libspdk_event_nvmf.a 00:03:08.785 SYMLINK libspdk_event_nbd.so 00:03:08.785 SYMLINK libspdk_event_ublk.so 00:03:09.047 SYMLINK libspdk_event_scsi.so 00:03:09.047 SO libspdk_event_nvmf.so.6.0 00:03:09.047 SYMLINK libspdk_event_nvmf.so 00:03:09.309 CC module/event/subsystems/iscsi/iscsi.o 00:03:09.309 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:09.570 LIB libspdk_event_vhost_scsi.a 00:03:09.570 LIB libspdk_event_iscsi.a 00:03:09.570 SO libspdk_event_vhost_scsi.so.3.0 00:03:09.570 SO libspdk_event_iscsi.so.6.0 00:03:09.570 SYMLINK libspdk_event_vhost_scsi.so 00:03:09.570 SYMLINK libspdk_event_iscsi.so 00:03:09.832 SO libspdk.so.6.0 00:03:09.832 SYMLINK libspdk.so 00:03:10.093 CC app/spdk_nvme_perf/perf.o 00:03:10.093 CC app/trace_record/trace_record.o 00:03:10.093 CC app/spdk_nvme_discover/discovery_aer.o 00:03:10.093 CXX app/trace/trace.o 00:03:10.093 TEST_HEADER include/spdk/accel.h 00:03:10.093 TEST_HEADER include/spdk/accel_module.h 00:03:10.093 TEST_HEADER include/spdk/assert.h 00:03:10.093 TEST_HEADER include/spdk/barrier.h 00:03:10.093 CC app/spdk_top/spdk_top.o 00:03:10.093 TEST_HEADER include/spdk/base64.h 00:03:10.093 TEST_HEADER include/spdk/bdev_module.h 00:03:10.093 TEST_HEADER include/spdk/bdev.h 00:03:10.093 TEST_HEADER include/spdk/bdev_zone.h 00:03:10.093 TEST_HEADER include/spdk/bit_array.h 00:03:10.093 TEST_HEADER include/spdk/bit_pool.h 00:03:10.093 CC app/spdk_nvme_identify/identify.o 00:03:10.093 CC test/rpc_client/rpc_client_test.o 00:03:10.093 TEST_HEADER include/spdk/blob_bdev.h 00:03:10.093 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:10.093 TEST_HEADER include/spdk/blobfs.h 00:03:10.093 CC app/spdk_lspci/spdk_lspci.o 00:03:10.093 TEST_HEADER include/spdk/blob.h 00:03:10.093 TEST_HEADER include/spdk/conf.h 00:03:10.093 TEST_HEADER include/spdk/config.h 00:03:10.093 TEST_HEADER include/spdk/cpuset.h 00:03:10.093 TEST_HEADER include/spdk/crc16.h 00:03:10.093 TEST_HEADER include/spdk/crc32.h 00:03:10.093 TEST_HEADER include/spdk/crc64.h 00:03:10.093 TEST_HEADER include/spdk/dif.h 00:03:10.093 TEST_HEADER include/spdk/endian.h 00:03:10.093 TEST_HEADER include/spdk/dma.h 00:03:10.093 TEST_HEADER include/spdk/env.h 00:03:10.093 TEST_HEADER include/spdk/env_dpdk.h 00:03:10.093 TEST_HEADER include/spdk/event.h 00:03:10.093 TEST_HEADER include/spdk/fd_group.h 00:03:10.093 TEST_HEADER include/spdk/file.h 00:03:10.093 TEST_HEADER include/spdk/fd.h 00:03:10.093 TEST_HEADER include/spdk/gpt_spec.h 00:03:10.093 TEST_HEADER include/spdk/ftl.h 00:03:10.093 TEST_HEADER include/spdk/hexlify.h 00:03:10.093 TEST_HEADER include/spdk/idxd.h 00:03:10.093 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:10.093 TEST_HEADER include/spdk/histogram_data.h 00:03:10.093 TEST_HEADER include/spdk/init.h 00:03:10.093 CC app/iscsi_tgt/iscsi_tgt.o 00:03:10.093 TEST_HEADER include/spdk/idxd_spec.h 00:03:10.093 TEST_HEADER include/spdk/ioat.h 00:03:10.093 TEST_HEADER include/spdk/ioat_spec.h 00:03:10.093 TEST_HEADER include/spdk/iscsi_spec.h 00:03:10.093 TEST_HEADER include/spdk/jsonrpc.h 00:03:10.093 CC app/nvmf_tgt/nvmf_main.o 00:03:10.093 TEST_HEADER include/spdk/json.h 00:03:10.093 TEST_HEADER include/spdk/keyring.h 00:03:10.093 TEST_HEADER include/spdk/keyring_module.h 00:03:10.093 TEST_HEADER include/spdk/likely.h 00:03:10.094 TEST_HEADER include/spdk/log.h 00:03:10.094 TEST_HEADER include/spdk/lvol.h 00:03:10.094 TEST_HEADER include/spdk/memory.h 00:03:10.094 TEST_HEADER include/spdk/mmio.h 
00:03:10.353 TEST_HEADER include/spdk/nbd.h 00:03:10.353 TEST_HEADER include/spdk/net.h 00:03:10.353 TEST_HEADER include/spdk/notify.h 00:03:10.353 CC app/spdk_dd/spdk_dd.o 00:03:10.353 TEST_HEADER include/spdk/nvme.h 00:03:10.353 TEST_HEADER include/spdk/nvme_intel.h 00:03:10.353 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:10.353 TEST_HEADER include/spdk/nvme_spec.h 00:03:10.353 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:10.353 TEST_HEADER include/spdk/nvme_zns.h 00:03:10.353 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:10.353 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:10.353 TEST_HEADER include/spdk/nvmf.h 00:03:10.353 TEST_HEADER include/spdk/nvmf_spec.h 00:03:10.353 TEST_HEADER include/spdk/nvmf_transport.h 00:03:10.353 TEST_HEADER include/spdk/opal.h 00:03:10.353 TEST_HEADER include/spdk/pci_ids.h 00:03:10.353 TEST_HEADER include/spdk/opal_spec.h 00:03:10.353 CC app/spdk_tgt/spdk_tgt.o 00:03:10.353 TEST_HEADER include/spdk/pipe.h 00:03:10.353 TEST_HEADER include/spdk/queue.h 00:03:10.353 TEST_HEADER include/spdk/reduce.h 00:03:10.353 TEST_HEADER include/spdk/rpc.h 00:03:10.353 TEST_HEADER include/spdk/scheduler.h 00:03:10.353 TEST_HEADER include/spdk/scsi.h 00:03:10.353 TEST_HEADER include/spdk/scsi_spec.h 00:03:10.353 TEST_HEADER include/spdk/sock.h 00:03:10.353 TEST_HEADER include/spdk/thread.h 00:03:10.353 TEST_HEADER include/spdk/stdinc.h 00:03:10.353 TEST_HEADER include/spdk/string.h 00:03:10.353 TEST_HEADER include/spdk/tree.h 00:03:10.353 TEST_HEADER include/spdk/trace.h 00:03:10.353 TEST_HEADER include/spdk/trace_parser.h 00:03:10.353 TEST_HEADER include/spdk/uuid.h 00:03:10.354 TEST_HEADER include/spdk/ublk.h 00:03:10.354 TEST_HEADER include/spdk/util.h 00:03:10.354 TEST_HEADER include/spdk/version.h 00:03:10.354 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:10.354 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:10.354 TEST_HEADER include/spdk/vhost.h 00:03:10.354 TEST_HEADER include/spdk/vmd.h 00:03:10.354 TEST_HEADER include/spdk/xor.h 00:03:10.354 TEST_HEADER include/spdk/zipf.h 00:03:10.354 CXX test/cpp_headers/accel.o 00:03:10.354 CXX test/cpp_headers/accel_module.o 00:03:10.354 CXX test/cpp_headers/assert.o 00:03:10.354 CXX test/cpp_headers/barrier.o 00:03:10.354 CXX test/cpp_headers/bdev.o 00:03:10.354 CXX test/cpp_headers/base64.o 00:03:10.354 CXX test/cpp_headers/bdev_module.o 00:03:10.354 CXX test/cpp_headers/bdev_zone.o 00:03:10.354 CXX test/cpp_headers/bit_pool.o 00:03:10.354 CXX test/cpp_headers/blob_bdev.o 00:03:10.354 CXX test/cpp_headers/bit_array.o 00:03:10.354 CXX test/cpp_headers/blobfs_bdev.o 00:03:10.354 CXX test/cpp_headers/blobfs.o 00:03:10.354 CXX test/cpp_headers/blob.o 00:03:10.354 CXX test/cpp_headers/conf.o 00:03:10.354 CXX test/cpp_headers/config.o 00:03:10.354 CXX test/cpp_headers/cpuset.o 00:03:10.354 CXX test/cpp_headers/crc32.o 00:03:10.354 CXX test/cpp_headers/crc16.o 00:03:10.354 CXX test/cpp_headers/crc64.o 00:03:10.354 CXX test/cpp_headers/dif.o 00:03:10.354 CXX test/cpp_headers/endian.o 00:03:10.354 CXX test/cpp_headers/dma.o 00:03:10.354 CXX test/cpp_headers/env_dpdk.o 00:03:10.354 CXX test/cpp_headers/env.o 00:03:10.354 CXX test/cpp_headers/event.o 00:03:10.354 CXX test/cpp_headers/fd.o 00:03:10.354 CXX test/cpp_headers/fd_group.o 00:03:10.354 CXX test/cpp_headers/file.o 00:03:10.354 CXX test/cpp_headers/gpt_spec.o 00:03:10.354 CXX test/cpp_headers/ftl.o 00:03:10.354 CXX test/cpp_headers/histogram_data.o 00:03:10.354 CXX test/cpp_headers/hexlify.o 00:03:10.354 CXX test/cpp_headers/idxd_spec.o 00:03:10.354 CXX 
test/cpp_headers/init.o 00:03:10.354 CXX test/cpp_headers/ioat.o 00:03:10.354 CXX test/cpp_headers/idxd.o 00:03:10.354 CXX test/cpp_headers/ioat_spec.o 00:03:10.354 CXX test/cpp_headers/json.o 00:03:10.354 CXX test/cpp_headers/iscsi_spec.o 00:03:10.354 CXX test/cpp_headers/jsonrpc.o 00:03:10.354 CXX test/cpp_headers/keyring_module.o 00:03:10.354 CXX test/cpp_headers/likely.o 00:03:10.354 CXX test/cpp_headers/keyring.o 00:03:10.354 CXX test/cpp_headers/lvol.o 00:03:10.354 CXX test/cpp_headers/log.o 00:03:10.354 CXX test/cpp_headers/memory.o 00:03:10.354 CXX test/cpp_headers/mmio.o 00:03:10.354 CXX test/cpp_headers/net.o 00:03:10.354 CXX test/cpp_headers/nbd.o 00:03:10.354 CXX test/cpp_headers/notify.o 00:03:10.354 CXX test/cpp_headers/nvme.o 00:03:10.354 CXX test/cpp_headers/nvme_ocssd.o 00:03:10.354 CXX test/cpp_headers/nvme_intel.o 00:03:10.354 CXX test/cpp_headers/nvme_spec.o 00:03:10.354 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:10.354 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:10.354 CXX test/cpp_headers/nvme_zns.o 00:03:10.354 CXX test/cpp_headers/nvmf_cmd.o 00:03:10.354 CXX test/cpp_headers/nvmf.o 00:03:10.354 CXX test/cpp_headers/opal.o 00:03:10.354 CXX test/cpp_headers/nvmf_transport.o 00:03:10.354 CXX test/cpp_headers/opal_spec.o 00:03:10.354 CXX test/cpp_headers/nvmf_spec.o 00:03:10.354 CXX test/cpp_headers/pci_ids.o 00:03:10.354 CXX test/cpp_headers/pipe.o 00:03:10.354 CXX test/cpp_headers/queue.o 00:03:10.354 CXX test/cpp_headers/rpc.o 00:03:10.354 CXX test/cpp_headers/reduce.o 00:03:10.354 CXX test/cpp_headers/scheduler.o 00:03:10.354 CXX test/cpp_headers/sock.o 00:03:10.354 CXX test/cpp_headers/scsi.o 00:03:10.354 CXX test/cpp_headers/scsi_spec.o 00:03:10.354 LINK spdk_lspci 00:03:10.354 CXX test/cpp_headers/stdinc.o 00:03:10.354 CXX test/cpp_headers/string.o 00:03:10.354 CXX test/cpp_headers/thread.o 00:03:10.354 CXX test/cpp_headers/trace_parser.o 00:03:10.354 CXX test/cpp_headers/trace.o 00:03:10.354 CXX test/cpp_headers/ublk.o 00:03:10.354 CXX test/cpp_headers/tree.o 00:03:10.354 CXX test/cpp_headers/util.o 00:03:10.354 CXX test/cpp_headers/version.o 00:03:10.354 CXX test/cpp_headers/uuid.o 00:03:10.354 CXX test/cpp_headers/vfio_user_pci.o 00:03:10.354 CXX test/cpp_headers/vfio_user_spec.o 00:03:10.354 CXX test/cpp_headers/vhost.o 00:03:10.354 CXX test/cpp_headers/vmd.o 00:03:10.354 CXX test/cpp_headers/xor.o 00:03:10.354 CXX test/cpp_headers/zipf.o 00:03:10.354 CC examples/ioat/verify/verify.o 00:03:10.354 CC examples/ioat/perf/perf.o 00:03:10.354 CC examples/util/zipf/zipf.o 00:03:10.354 CC test/app/histogram_perf/histogram_perf.o 00:03:10.354 CC test/thread/poller_perf/poller_perf.o 00:03:10.354 CC test/env/vtophys/vtophys.o 00:03:10.354 CC test/env/memory/memory_ut.o 00:03:10.354 CC test/app/stub/stub.o 00:03:10.354 CC test/app/jsoncat/jsoncat.o 00:03:10.616 CC test/env/pci/pci_ut.o 00:03:10.616 CC app/fio/nvme/fio_plugin.o 00:03:10.616 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:10.616 LINK spdk_nvme_discover 00:03:10.616 CC test/dma/test_dma/test_dma.o 00:03:10.616 LINK rpc_client_test 00:03:10.616 CC app/fio/bdev/fio_plugin.o 00:03:10.616 CC test/app/bdev_svc/bdev_svc.o 00:03:10.616 LINK interrupt_tgt 00:03:10.616 LINK nvmf_tgt 00:03:10.616 LINK spdk_trace_record 00:03:10.616 LINK iscsi_tgt 00:03:10.875 CC test/env/mem_callbacks/mem_callbacks.o 00:03:10.875 LINK spdk_tgt 00:03:10.875 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:10.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:10.875 CC 
test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.875 LINK vtophys 00:03:10.875 LINK zipf 00:03:10.875 LINK jsoncat 00:03:10.875 LINK verify 00:03:10.875 LINK bdev_svc 00:03:11.135 LINK histogram_perf 00:03:11.135 LINK spdk_dd 00:03:11.135 LINK stub 00:03:11.135 LINK env_dpdk_post_init 00:03:11.135 LINK poller_perf 00:03:11.135 LINK ioat_perf 00:03:11.135 LINK spdk_trace 00:03:11.135 LINK test_dma 00:03:11.135 LINK spdk_nvme_perf 00:03:11.395 LINK pci_ut 00:03:11.395 LINK spdk_bdev 00:03:11.395 LINK nvme_fuzz 00:03:11.395 LINK mem_callbacks 00:03:11.395 CC examples/sock/hello_world/hello_sock.o 00:03:11.395 LINK spdk_nvme 00:03:11.395 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.395 CC examples/vmd/led/led.o 00:03:11.395 CC examples/idxd/perf/perf.o 00:03:11.395 LINK vhost_fuzz 00:03:11.395 CC examples/thread/thread/thread_ex.o 00:03:11.395 LINK spdk_nvme_identify 00:03:11.395 CC test/event/event_perf/event_perf.o 00:03:11.656 CC app/vhost/vhost.o 00:03:11.656 CC test/event/reactor/reactor.o 00:03:11.656 CC test/event/reactor_perf/reactor_perf.o 00:03:11.656 CC test/event/app_repeat/app_repeat.o 00:03:11.656 LINK spdk_top 00:03:11.656 LINK led 00:03:11.656 LINK lsvmd 00:03:11.656 CC test/event/scheduler/scheduler.o 00:03:11.656 LINK hello_sock 00:03:11.656 CC test/nvme/reset/reset.o 00:03:11.656 CC test/nvme/overhead/overhead.o 00:03:11.656 CC test/nvme/aer/aer.o 00:03:11.656 CC test/nvme/boot_partition/boot_partition.o 00:03:11.656 CC test/nvme/reserve/reserve.o 00:03:11.656 CC test/nvme/cuse/cuse.o 00:03:11.656 CC test/nvme/simple_copy/simple_copy.o 00:03:11.656 CC test/nvme/connect_stress/connect_stress.o 00:03:11.656 LINK event_perf 00:03:11.656 CC test/nvme/startup/startup.o 00:03:11.656 CC test/nvme/sgl/sgl.o 00:03:11.656 CC test/nvme/compliance/nvme_compliance.o 00:03:11.656 CC test/nvme/e2edp/nvme_dp.o 00:03:11.656 CC test/blobfs/mkfs/mkfs.o 00:03:11.656 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.656 CC test/nvme/fdp/fdp.o 00:03:11.656 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.656 CC test/nvme/err_injection/err_injection.o 00:03:11.656 LINK reactor 00:03:11.656 LINK app_repeat 00:03:11.656 LINK idxd_perf 00:03:11.656 LINK reactor_perf 00:03:11.656 LINK vhost 00:03:11.656 CC test/accel/dif/dif.o 00:03:11.656 LINK thread 00:03:11.917 LINK memory_ut 00:03:11.917 CC test/lvol/esnap/esnap.o 00:03:11.917 LINK scheduler 00:03:11.917 LINK boot_partition 00:03:11.917 LINK connect_stress 00:03:11.917 LINK startup 00:03:11.917 LINK reserve 00:03:11.917 LINK err_injection 00:03:11.917 LINK doorbell_aers 00:03:11.917 LINK fused_ordering 00:03:11.917 LINK reset 00:03:11.917 LINK mkfs 00:03:11.917 LINK simple_copy 00:03:11.917 LINK aer 00:03:11.917 LINK nvme_dp 00:03:11.917 LINK overhead 00:03:11.917 LINK sgl 00:03:11.917 LINK fdp 00:03:11.917 LINK nvme_compliance 00:03:12.179 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:12.179 CC examples/nvme/arbitration/arbitration.o 00:03:12.179 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.179 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.179 CC examples/nvme/abort/abort.o 00:03:12.179 CC examples/nvme/reconnect/reconnect.o 00:03:12.179 CC examples/nvme/hotplug/hotplug.o 00:03:12.179 CC examples/nvme/hello_world/hello_world.o 00:03:12.179 LINK dif 00:03:12.179 CC examples/accel/perf/accel_perf.o 00:03:12.179 LINK pmr_persistence 00:03:12.179 LINK cmb_copy 00:03:12.179 LINK iscsi_fuzz 00:03:12.179 CC examples/blob/cli/blobcli.o 00:03:12.179 CC examples/blob/hello_world/hello_blob.o 00:03:12.441 LINK hello_world 00:03:12.441 LINK 
hotplug 00:03:12.441 LINK arbitration 00:03:12.441 LINK reconnect 00:03:12.441 LINK abort 00:03:12.441 LINK nvme_manage 00:03:12.703 LINK hello_blob 00:03:12.703 LINK accel_perf 00:03:12.703 CC test/bdev/bdevio/bdevio.o 00:03:12.703 LINK blobcli 00:03:12.966 LINK cuse 00:03:13.227 LINK bdevio 00:03:13.227 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.227 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.490 LINK hello_bdev 00:03:14.064 LINK bdevperf 00:03:14.637 CC examples/nvmf/nvmf/nvmf.o 00:03:14.898 LINK nvmf 00:03:16.287 LINK esnap 00:03:16.287 00:03:16.287 real 0m51.389s 00:03:16.287 user 6m31.604s 00:03:16.287 sys 4m14.588s 00:03:16.287 09:51:55 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:16.287 09:51:55 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.287 ************************************ 00:03:16.287 END TEST make 00:03:16.287 ************************************ 00:03:16.550 09:51:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.550 09:51:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.550 09:51:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.550 09:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.550 09:51:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.550 09:51:55 -- pm/common@44 -- $ pid=953857 00:03:16.550 09:51:55 -- pm/common@50 -- $ kill -TERM 953857 00:03:16.550 09:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.550 09:51:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.550 09:51:55 -- pm/common@44 -- $ pid=953858 00:03:16.550 09:51:55 -- pm/common@50 -- $ kill -TERM 953858 00:03:16.550 09:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.550 09:51:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:16.550 09:51:55 -- pm/common@44 -- $ pid=953860 00:03:16.550 09:51:55 -- pm/common@50 -- $ kill -TERM 953860 00:03:16.550 09:51:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.550 09:51:55 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:16.550 09:51:55 -- pm/common@44 -- $ pid=953876 00:03:16.550 09:51:55 -- pm/common@50 -- $ sudo -E kill -TERM 953876 00:03:16.550 09:51:55 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.550 09:51:55 -- nvmf/common.sh@7 -- # uname -s 00:03:16.550 09:51:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.550 09:51:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.550 09:51:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.550 09:51:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.550 09:51:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.550 09:51:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.550 09:51:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.550 09:51:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.550 09:51:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.550 09:51:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.550 09:51:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:16.550 09:51:55 -- 
nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:16.550 09:51:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.550 09:51:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.550 09:51:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.550 09:51:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.550 09:51:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:16.550 09:51:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.550 09:51:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.550 09:51:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.550 09:51:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.550 09:51:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.550 09:51:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.550 09:51:55 -- paths/export.sh@5 -- # export PATH 00:03:16.550 09:51:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.550 09:51:55 -- nvmf/common.sh@47 -- # : 0 00:03:16.550 09:51:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:16.550 09:51:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:16.550 09:51:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.550 09:51:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.550 09:51:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.550 09:51:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:16.550 09:51:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:16.550 09:51:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:16.550 09:51:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.550 09:51:55 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.550 09:51:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.550 09:51:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.550 09:51:55 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.551 09:51:55 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.551 09:51:55 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.551 09:51:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.551 09:51:55 -- 
spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.551 09:51:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.551 09:51:55 -- spdk/autotest.sh@48 -- # udevadm_pid=1017581 00:03:16.551 09:51:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.551 09:51:55 -- pm/common@17 -- # local monitor 00:03:16.551 09:51:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.551 09:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.551 09:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.551 09:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.551 09:51:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.551 09:51:55 -- pm/common@21 -- # date +%s 00:03:16.551 09:51:55 -- pm/common@25 -- # sleep 1 00:03:16.551 09:51:55 -- pm/common@21 -- # date +%s 00:03:16.551 09:51:55 -- pm/common@21 -- # date +%s 00:03:16.551 09:51:55 -- pm/common@21 -- # date +%s 00:03:16.551 09:51:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893915 00:03:16.551 09:51:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893915 00:03:16.551 09:51:55 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893915 00:03:16.551 09:51:55 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893915 00:03:16.551 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893915_collect-vmstat.pm.log 00:03:16.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893915_collect-cpu-load.pm.log 00:03:16.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893915_collect-cpu-temp.pm.log 00:03:16.813 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893915_collect-bmc-pm.bmc.pm.log 00:03:17.759 09:51:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.759 09:51:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.759 09:51:56 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.759 09:51:56 -- common/autotest_common.sh@10 -- # set +x 00:03:17.759 09:51:56 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.759 09:51:56 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:17.759 09:51:56 -- common/autotest_common.sh@10 -- # set +x 00:03:17.759 09:51:56 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:17.759 09:51:56 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.759 09:51:56 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.759 09:51:56 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:17.759 09:51:56 -- spdk/autotest.sh@63 -- 
# cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.759 09:51:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.759 09:51:56 -- common/autotest_common.sh@1455 -- # uname 00:03:17.759 09:51:56 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.759 09:51:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.759 09:51:56 -- common/autotest_common.sh@1475 -- # uname 00:03:17.759 09:51:56 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.759 09:51:56 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:17.759 09:51:56 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:17.759 09:51:56 -- spdk/autotest.sh@72 -- # hash lcov 00:03:17.759 09:51:56 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:17.759 09:51:56 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:17.759 --rc lcov_branch_coverage=1 00:03:17.759 --rc lcov_function_coverage=1 00:03:17.759 --rc genhtml_branch_coverage=1 00:03:17.759 --rc genhtml_function_coverage=1 00:03:17.759 --rc genhtml_legend=1 00:03:17.759 --rc geninfo_all_blocks=1 00:03:17.759 ' 00:03:17.759 09:51:56 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:17.759 --rc lcov_branch_coverage=1 00:03:17.759 --rc lcov_function_coverage=1 00:03:17.759 --rc genhtml_branch_coverage=1 00:03:17.759 --rc genhtml_function_coverage=1 00:03:17.759 --rc genhtml_legend=1 00:03:17.759 --rc geninfo_all_blocks=1 00:03:17.759 ' 00:03:17.759 09:51:56 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:17.759 --rc lcov_branch_coverage=1 00:03:17.759 --rc lcov_function_coverage=1 00:03:17.759 --rc genhtml_branch_coverage=1 00:03:17.759 --rc genhtml_function_coverage=1 00:03:17.759 --rc genhtml_legend=1 00:03:17.759 --rc geninfo_all_blocks=1 00:03:17.759 --no-external' 00:03:17.759 09:51:56 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:17.759 --rc lcov_branch_coverage=1 00:03:17.759 --rc lcov_function_coverage=1 00:03:17.759 --rc genhtml_branch_coverage=1 00:03:17.759 --rc genhtml_function_coverage=1 00:03:17.759 --rc genhtml_legend=1 00:03:17.759 --rc geninfo_all_blocks=1 00:03:17.759 --no-external' 00:03:17.759 09:51:56 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:17.759 lcov: LCOV version 1.14 00:03:17.759 09:51:56 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:32.720 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.720 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:44.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:44.957 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:44.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 
00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no 
functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:44.958 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:44.958 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:44.958 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 
00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:44.959 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:44.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:44.959 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:46.876 09:52:25 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:46.876 09:52:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:46.876 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:03:46.876 09:52:25 -- spdk/autotest.sh@91 -- # rm -f 00:03:46.876 09:52:25 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.087 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:51.087 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:51.087 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:51.087 09:52:30 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:51.087 09:52:30 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.087 09:52:30 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.087 09:52:30 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.087 09:52:30 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:51.087 09:52:30 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:51.087 09:52:30 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:51.087 09:52:30 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:51.087 09:52:30 -- 
common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:51.087 09:52:30 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:51.087 09:52:30 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:51.087 09:52:30 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:51.087 09:52:30 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:51.087 09:52:30 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:03:51.087 09:52:30 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:51.087 No valid GPT data, bailing
00:03:51.087 09:52:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:51.087 09:52:30 -- scripts/common.sh@391 -- # pt=
00:03:51.087 09:52:30 -- scripts/common.sh@392 -- # return 1
00:03:51.087 09:52:30 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:51.087 1+0 records in
00:03:51.087 1+0 records out
00:03:51.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566113 s, 185 MB/s
00:03:51.087 09:52:30 -- spdk/autotest.sh@118 -- # sync
00:03:51.087 09:52:30 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:51.087 09:52:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:51.087 09:52:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:59.232 09:52:38 -- spdk/autotest.sh@124 -- # uname -s
00:03:59.232 09:52:38 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:59.232 09:52:38 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:59.232 09:52:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:59.232 09:52:38 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:59.232 09:52:38 -- common/autotest_common.sh@10 -- # set +x
00:03:59.232 ************************************
00:03:59.232 START TEST setup.sh
00:03:59.232 ************************************
00:03:59.232 09:52:38 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:59.232 * Looking for test storage...
00:03:59.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:59.232 09:52:38 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:59.232 09:52:38 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:59.232 09:52:38 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:59.232 09:52:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:59.232 09:52:38 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:59.232 09:52:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:59.232 ************************************
00:03:59.232 START TEST acl
00:03:59.232 ************************************
00:03:59.232 09:52:38 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:59.493 * Looking for test storage...
00:03:59.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.493 09:52:38 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:59.493 09:52:38 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:59.493 09:52:38 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.493 09:52:38 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:03.717 09:52:42 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:03.717 09:52:42 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:03.717 09:52:42 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:03.717 09:52:42 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:03.717 09:52:42 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.717 09:52:42 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:07.021 Hugepages 00:04:07.021 node hugesize free / total 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 00:04:07.021 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:07.021 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:07.022 09:52:45 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:07.022 09:52:45 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.022 09:52:45 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.022 09:52:45 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:07.022 ************************************ 00:04:07.022 START TEST denied 00:04:07.022 ************************************ 00:04:07.022 09:52:45 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:07.022 09:52:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:07.022 09:52:45 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:07.022 09:52:45 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:07.022 09:52:45 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.022 09:52:45 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.230 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:11.230 09:52:49 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:11.230 09:52:49 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:15.439
00:04:15.439 real 0m8.532s
00:04:15.439 user 0m2.816s
00:04:15.439 sys 0m4.984s
00:04:15.439 09:52:54 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:15.439 09:52:54 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:04:15.439 ************************************
00:04:15.439 END TEST denied
00:04:15.439 ************************************
00:04:15.439 09:52:54 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:04:15.439 09:52:54 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:15.439 09:52:54 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:15.439 09:52:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:15.439 ************************************
00:04:15.439 START TEST allowed
00:04:15.439 ************************************
00:04:15.439 09:52:54 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed
00:04:15.439 09:52:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0
00:04:15.439 09:52:54 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:04:15.439 09:52:54 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*'
00:04:15.439 09:52:54 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:04:15.439 09:52:54 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:22.030 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:04:22.030 09:53:00 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:04:22.030 09:53:00 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:04:22.030 09:53:00 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:04:22.030 09:53:00 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:22.030 09:53:00 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:25.337
00:04:25.337 real 0m9.482s
00:04:25.337 user 0m2.750s
00:04:25.337 sys 0m4.994s
00:04:25.338 09:53:04 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.338 09:53:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:04:25.338 ************************************
00:04:25.338 END TEST allowed
00:04:25.338 ************************************
00:04:25.338
00:04:25.338 real 0m25.741s
00:04:25.338 user 0m8.466s
00:04:25.338 sys 0m14.981s
00:04:25.338 09:53:04 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.338 09:53:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:04:25.338 ************************************
00:04:25.338 END TEST acl
00:04:25.338 ************************************
00:04:25.338 09:53:04 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:04:25.338 09:53:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.338 09:53:04 setup.sh --
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.338 09:53:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.338 ************************************ 00:04:25.338 START TEST hugepages 00:04:25.338 ************************************ 00:04:25.338 09:53:04 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:25.338 * Looking for test storage... 00:04:25.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 102507004 kB' 'MemAvailable: 106224804 kB' 'Buffers: 2704 kB' 'Cached: 14796760 kB' 'SwapCached: 0 kB' 'Active: 11640948 kB' 'Inactive: 3693560 kB' 'Active(anon): 11161148 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538616 kB' 'Mapped: 226732 kB' 'Shmem: 10626104 kB' 'KReclaimable: 585384 kB' 'Slab: 1466380 kB' 'SReclaimable: 585384 kB' 'SUnreclaim: 880996 kB' 'KernelStack: 27232 kB' 'PageTables: 9180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460872 kB' 'Committed_AS: 12739932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.338 09:53:04 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.338 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:25.339 09:53:04 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:25.339 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
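At this point the lookup has returned Hugepagesize = 2048 kB, clear_hp has echoed 0 into each node's per-size pool, and get_test_nr_hugepages 2097152 0 (below) converts the 2 GiB request into 2097152 / 2048 = 1024 pages on node 0. The sketch that follows shows the sysfs writes these steps amount to; the suite itself drives the actual allocation through scripts/setup.sh, and the paths used here are the ones visible in the trace.

#!/usr/bin/env bash
# Equivalent of the traced clear_hp / default_setup bookkeeping: zero every
# per-node hugepage pool, then reserve the test pages on node 0.
# Illustrative sketch only; needs root to actually take effect.
set -euo pipefail

hugepage_kb=2048                        # Hugepagesize reported above
size_kb=2097152                         # size passed to get_test_nr_hugepages
nr_pages=$(( size_kb / hugepage_kb ))   # 2097152 / 2048 = 1024

# clear_hp: write 0 to every hugepage pool on every NUMA node.
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
    echo 0 > "$hp"
done

# default_setup targets node 0 only (node_ids=('0')), so the 1024 pages land
# in node0's 2048kB pool.
echo "$nr_pages" > /sys/devices/system/node/node0/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages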
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:25.340 09:53:04 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:25.340 09:53:04 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.340 09:53:04 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:25.340 09:53:04 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:25.340 ************************************
00:04:25.340 START TEST default_setup
00:04:25.340 ************************************
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:25.340 09:53:04 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:28.647 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:04:28.647 0000:80:01.0 (8086 0b00): 
ioatdma -> vfio-pci 00:04:28.647 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:28.647 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:28.908 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.175 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104672524 kB' 'MemAvailable: 108390260 kB' 'Buffers: 2704 kB' 'Cached: 14796876 kB' 'SwapCached: 0 kB' 'Active: 11659096 kB' 'Inactive: 3693560 kB' 'Active(anon): 11179296 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556456 kB' 'Mapped: 227064 kB' 'Shmem: 10626220 kB' 'KReclaimable: 585320 kB' 'Slab: 1464532 kB' 'SReclaimable: 585320 kB' 'SUnreclaim: 879212 kB' 'KernelStack: 27280 kB' 'PageTables: 9168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12756504 kB' 'VmallocTotal: 
13743895347199 kB' 'VmallocUsed: 235912 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.176 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
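The verify_nr_hugepages pass running here re-reads the counters after setup: get_meminfo AnonHugePages and get_meminfo HugePages_Surp both returned 0 above, and the call starting here does the same for HugePages_Rsvd. The meminfo snapshots printed in this pass report HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Rsvd: 0 and HugePages_Surp: 0. Below is a small check in the same spirit, illustrative only and not the script's exact assertions.

#!/usr/bin/env bash
# Read the hugepage counters that the verification pass inspects and apply a
# sanity check similar in spirit to verify_nr_hugepages (illustrative only).
set -euo pipefail

read_counter() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

total=$(read_counter HugePages_Total)   # 1024 in this run
free=$(read_counter HugePages_Free)     # 1024 in this run
rsvd=$(read_counter HugePages_Rsvd)     # 0
surp=$(read_counter HugePages_Surp)     # 0

echo "total=$total free=$free rsvd=$rsvd surp=$surp"
(( surp == 0 ))     || { echo "unexpected surplus hugepages" >&2; exit 1; }
(( total == 1024 )) || { echo "expected the 1024 pages set up above" >&2; exit 1; }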
00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104672432 kB' 'MemAvailable: 108390168 kB' 'Buffers: 2704 kB' 'Cached: 14796880 kB' 'SwapCached: 0 kB' 'Active: 11659060 kB' 'Inactive: 3693560 kB' 'Active(anon): 11179260 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556540 kB' 'Mapped: 228068 kB' 'Shmem: 10626224 kB' 'KReclaimable: 585320 kB' 'Slab: 1464524 kB' 'SReclaimable: 585320 kB' 'SUnreclaim: 879204 kB' 'KernelStack: 27312 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12759524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.177 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:29.178 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:29.178 09:53:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[... xtrace elided: get_meminfo's key-matching loop reads and skips each remaining /proc/meminfo field (SecPageTables through HugePages_Rsvd) until it reaches the requested HugePages_Surp entry ...]
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
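The loop traced above is setup/common.sh splitting each meminfo line on IFS=': ' and comparing the field name against the key it was asked for. As a reading aid only, here is a minimal standalone sketch of that parsing pattern in bash; the helper name get_meminfo_value and its usage are illustrative assumptions, not the SPDK script itself.

  # Minimal sketch (illustrative, not the SPDK helper): return the value of one
  # /proc/meminfo field by splitting each line on ':' and whitespace, the same
  # IFS=': ' / read -r var val _ pattern seen in the trace above.
  get_meminfo_value() {
      local key=$1 file=${2:-/proc/meminfo}
      local var val _
      while IFS=': ' read -r var val _; do
          if [[ $var == "$key" ]]; then   # first matching field wins
              echo "$val"
              return 0
          fi
      done < "$file"
      return 1                            # field not present
  }
  # e.g. get_meminfo_value HugePages_Surp   -> prints 0 on this machine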
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.179 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104674052 kB' 'MemAvailable: 108391788 kB' 'Buffers: 2704 kB' 'Cached: 14796880 kB' 'SwapCached: 0 kB' 'Active: 11658780 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178980 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556240 kB' 'Mapped: 227060 kB' 'Shmem: 10626224 kB' 'KReclaimable: 585320 kB' 'Slab: 1464556 kB' 'SReclaimable: 585320 kB' 'SUnreclaim: 879236 kB' 'KernelStack: 27264 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12756544 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
[... xtrace elided: the key-matching loop skips every /proc/meminfo field from MemTotal through HugePages_Free until it reaches HugePages_Rsvd ...]
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:29.181 nr_hugepages=1024
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.181 resv_hugepages=0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.181 surplus_hugepages=0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.181 anon_hugepages=0
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
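The checks at hugepages.sh@107 and @109 assert that the hugepage accounting is self-consistent: the 1024 pages the kernel reports must equal the requested nr_hugepages plus the surplus and reserved counts that were just read back (1024 + 0 + 0). A hedged restatement of that arithmetic, with values taken from this log and variable names that are illustrative rather than the script's own:

  # Worked restatement of the accounting check above.
  nr_hugepages=1024; surp=0; resv=0
  (( 1024 == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"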
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.181 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104673368 kB' 'MemAvailable: 108391104 kB' 'Buffers: 2704 kB' 'Cached: 14796880 kB' 'SwapCached: 0 kB' 'Active: 11658536 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178736 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555924 kB' 'Mapped: 227060 kB' 'Shmem: 10626224 kB' 'KReclaimable: 585320 kB' 'Slab: 1464556 kB' 'SReclaimable: 585320 kB' 'SUnreclaim: 879236 kB' 'KernelStack: 27216 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12756568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235960 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
[... xtrace elided: the key-matching loop skips every /proc/meminfo field from MemTotal through Unaccepted until it reaches HugePages_Total ...]
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
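This lookup returns HugePages_Total: 1024, and the same dump reports Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which agree with each other: 1024 pages of 2048 kB is exactly 2097152 kB. A quick cross-check in plain shell arithmetic (a reading aid, not part of the test script):

  # Cross-check of the dump above: total hugetlb memory = page count * page size.
  pages=1024      # HugePages_Total
  page_kb=2048    # Hugepagesize in kB
  echo $(( pages * page_kb ))   # 2097152, matching 'Hugetlb: 2097152 kB'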
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.183 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57938140 kB' 'MemUsed: 7720868 kB' 'SwapCached: 0 kB' 'Active: 2720720 kB' 'Inactive: 235936 kB' 'Active(anon): 2481296 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2692664 kB' 'Mapped: 94552 kB' 'AnonPages: 267196 kB' 'Shmem: 2217304 kB' 'KernelStack: 15192 kB' 'PageTables: 5724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784256 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
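Because get_meminfo was called with a node argument this time, mem_f is switched from /proc/meminfo to /sys/devices/system/node/node0/meminfo, so the dump above covers NUMA node 0 only: its MemUsed of 7720868 kB is simply MemTotal minus MemFree (65659008 - 57938140), and all 1024 hugepages are resident on node 0. As a reading aid, a small hedged sketch (illustrative, not the SPDK helper) that walks the same per-node sysfs files:

  # Illustrative sketch: list per-node hugepage counters from the per-node
  # meminfo files the trace switches to when a node number is given.
  for f in /sys/devices/system/node/node[0-9]*/meminfo; do
      node=${f%/meminfo}; node=${node##*node}
      total=$(awk '/HugePages_Total:/ {print $NF}' "$f")
      free=$(awk '/HugePages_Free:/ {print $NF}' "$f")
      echo "node $node: HugePages_Total=$total HugePages_Free=$free"
  done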
[... xtrace elided: the key-matching loop now walks the node0 meminfo fields (MemTotal through Unaccepted), skipping each one while it looks for HugePages_Surp ...]
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.184 09:53:08
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:29.184 node0=1024 expecting 1024
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:29.184
00:04:29.184 real 0m3.921s
00:04:29.184 user 0m1.499s
00:04:29.184 sys 0m2.401s
00:04:29.184 09:53:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:29.185 09:53:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:29.185 ************************************
00:04:29.185 END TEST default_setup
00:04:29.185 ************************************
00:04:29.185 09:53:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:29.185 09:53:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.185 09:53:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.185 09:53:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:29.446 ************************************
00:04:29.446 START TEST per_node_1G_alloc
00:04:29.446 ************************************
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
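The trace above closes the default_setup case (node0=1024 expecting 1024, 0m3.921s wall time) and opens per_node_1G_alloc, where the get_test_nr_hugepages 1048576 0 1 call, continued below, derives a per-node page count. The following is a minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size reported in the meminfo dumps further down; the function and variable names are illustrative, not the exact helpers in setup/hugepages.sh:

# Illustrative sketch only, not the literal code of setup/hugepages.sh.
# 1 GiB (1048576 kB) requested on nodes 0 and 1 with 2048 kB hugepages
# works out to 512 pages per node, i.e. NRHUGE=512 HUGENODE=0,1 in the trace.
size_kb=1048576                # requested test size in kB (1 GiB)
default_hugepage_kb=2048       # Hugepagesize from /proc/meminfo
node_ids=(0 1)                 # nodes named on the command line

nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 512

declare -A nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[$node]=$nr_hugepages                   # 512 pages on node0 and node1
done

echo "NRHUGE=$nr_hugepages HUGENODE=${node_ids[*]}" # prints: NRHUGE=512 HUGENODE=0 1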
00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:29.446 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.447 09:53:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.786 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:32.786 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:32.786 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104688328 kB' 'MemAvailable: 108406032 kB' 'Buffers: 2704 kB' 'Cached: 14797036 kB' 'SwapCached: 0 kB' 'Active: 11657020 kB' 'Inactive: 3693560 kB' 'Active(anon): 11177220 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553604 kB' 'Mapped: 226012 kB' 'Shmem: 10626380 kB' 'KReclaimable: 585288 kB' 'Slab: 1464992 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879704 kB' 'KernelStack: 27248 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12748408 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236088 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 
101711872 kB' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.051 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
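The /proc/meminfo snapshot printed above reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, consistent with the 512 pages requested on each of the two nodes. A quick consistency check in plain shell arithmetic (illustrative only, not part of the test scripts):

# 1024 pages x 2048 kB per page should equal the Hugetlb figure.
hugepages_total=1024
hugepagesize_kb=2048
echo $(( hugepages_total * hugepagesize_kb ))   # 2097152 kB, matching the Hugetlb line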
00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.052 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 
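The AnonHugePages lookup just finished (anon=0), and the same get_meminfo walk repeats below for HugePages_Surp. The pattern the xtrace shows, reading the memory snapshot with IFS=': ' and echoing the value once the requested key matches, can be sketched as follows; this is an illustrative reimplementation under those assumptions, not the literal body of setup/common.sh (which also handles per-node meminfo files):

# Illustrative sketch of the key lookup the trace demonstrates.
get_meminfo_value() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue   # skip non-matching keys, as in the trace
    echo "$val"                        # value in kB, or a bare count for HugePages_* keys
    return 0
  done < /proc/meminfo
  return 1
}

# Example use: get_meminfo_value HugePages_Surp   -> 0 on this system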
00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104690132 kB' 'MemAvailable: 108407836 kB' 'Buffers: 2704 kB' 'Cached: 14797040 kB' 'SwapCached: 0 kB' 'Active: 11657560 kB' 'Inactive: 3693560 kB' 'Active(anon): 11177760 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553908 kB' 'Mapped: 226088 kB' 'Shmem: 10626384 kB' 'KReclaimable: 585288 kB' 'Slab: 1465024 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879736 kB' 'KernelStack: 27088 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12746812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.053 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.054 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.054 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.055 09:53:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:33.055 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104691252 kB' 'MemAvailable: 108408956 kB' 'Buffers: 2704 kB' 'Cached: 14797056 kB' 'SwapCached: 0 kB' 'Active: 11656852 kB' 'Inactive: 3693560 kB' 'Active(anon): 11177052 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553844 kB' 'Mapped: 226004 kB' 'Shmem: 10626400 kB' 'KReclaimable: 585288 kB' 'Slab: 1464988 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879700 kB' 'KernelStack: 27200 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12748204 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236104 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.055 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.056 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:33.057 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:33.058 nr_hugepages=1024 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:33.058 resv_hugepages=0 00:04:33.058 09:53:12 
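(note) At this point both readbacks are done: HugePages_Rsvd is 0 (resv=0) and the earlier pass gave surp=0, with nr_hugepages=1024 echoed by the test. The assertions that follow in hugepages.sh reduce to checking that the kernel really exposes the 1024 pages that were requested. A condensed sketch of that accounting, with illustrative variable names:

    nr_hugepages=1024   # requested by the test earlier
    surp=0              # HugePages_Surp read back from /proc/meminfo above
    resv=0              # HugePages_Rsvd read back from /proc/meminfo above
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
    # the test only proceeds if the kernel's totals line up with what was requested
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"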
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:33.058 surplus_hugepages=0 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:33.058 anon_hugepages=0 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104692596 kB' 'MemAvailable: 108410300 kB' 'Buffers: 2704 kB' 'Cached: 14797080 kB' 'SwapCached: 0 kB' 'Active: 11656736 kB' 'Inactive: 3693560 kB' 'Active(anon): 11176936 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553784 kB' 'Mapped: 226004 kB' 'Shmem: 10626424 kB' 'KReclaimable: 585288 kB' 'Slab: 1464988 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879700 kB' 'KernelStack: 27312 kB' 'PageTables: 9160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12748472 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236120 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.058 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.059 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:33.060 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58996632 kB' 'MemUsed: 6662376 kB' 'SwapCached: 0 kB' 'Active: 2721568 kB' 'Inactive: 235936 kB' 'Active(anon): 2482144 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2692720 kB' 'Mapped: 93664 kB' 'AnonPages: 267952 kB' 'Shmem: 2217360 kB' 'KernelStack: 15368 kB' 'PageTables: 5944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784556 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513872 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.060 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.061 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45696296 kB' 'MemUsed: 14983540 kB' 'SwapCached: 0 kB' 'Active: 8935532 kB' 'Inactive: 3457624 kB' 'Active(anon): 8695156 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12107104 kB' 'Mapped: 132340 kB' 'AnonPages: 286092 kB' 'Shmem: 8409104 kB' 'KernelStack: 11944 kB' 'PageTables: 3056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314604 kB' 'Slab: 680464 kB' 'SReclaimable: 314604 kB' 'SUnreclaim: 365860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
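Editor's note on the trace above: setup/common.sh's get_meminfo has just loaded the node1 meminfo snapshot (printf '%s\n' 'MemTotal: 60679836 kB' ...) and is now scanning it one 'key: value' pair at a time until it reaches HugePages_Surp. A condensed stand-alone sketch of that same lookup follows; the helper name get_meminfo_sketch is mine, but the mapfile, the "Node N " prefix strip, and the IFS=': ' read loop mirror what the trace shows, so treat it as an illustration rather than the exact library code.

  shopt -s extglob    # the "Node N " prefix strip below uses the +([0-9]) extended glob

  # Sketch: load a meminfo file into an array, drop the "Node <n> " prefix that
  # per-node files add, then scan for a single key and print its value.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 HugePages_Surp: 0" -> "HugePages_Surp: 0"
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  # get_meminfo_sketch HugePages_Surp 1    # prints 0 for node 1 in the run above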
00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.062 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.063 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:33.064 node0=512 expecting 512 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:33.064 node1=512 expecting 512 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:33.064 00:04:33.064 real 0m3.826s 00:04:33.064 user 0m1.581s 00:04:33.064 sys 0m2.303s 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.064 09:53:12 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:33.064 ************************************ 00:04:33.064 END TEST per_node_1G_alloc 00:04:33.064 ************************************ 00:04:33.064 09:53:12 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:33.064 09:53:12 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.064 09:53:12 
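Editor's note: per_node_1G_alloc passes above because each node ends up with 512 free 2048 kB pages ('node0=512 expecting 512', 'node1=512 expecting 512', then the [[ 512 == 512 ]] check). A quick way to eyeball the same per-node counts outside the test harness is sketched below; the sysfs path is the standard kernel per-node hugepage location, not something taken from this log, so adjust it if your kernel exposes a different page size.

  #!/usr/bin/env bash
  # Print the 2048 kB hugepage count per NUMA node and compare it to an expected value.
  shopt -s nullglob
  expected=${1:-512}
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      node=${node_dir##*/node}
      nr=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
      echo "node${node}=${nr} expecting ${expected}"
      [[ $nr -eq $expected ]] || echo "node${node}: count differs from expectation" >&2
  done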
setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.064 09:53:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.325 ************************************ 00:04:33.325 START TEST even_2G_alloc 00:04:33.325 ************************************ 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.325 09:53:12 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.628 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:04:36.628 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:36.628 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:36.628 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104719716 kB' 'MemAvailable: 108437420 kB' 'Buffers: 2704 kB' 'Cached: 14797220 kB' 'SwapCached: 0 kB' 'Active: 11658708 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178908 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554560 kB' 'Mapped: 226096 kB' 'Shmem: 10626564 kB' 'KReclaimable: 585288 kB' 'Slab: 1465212 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879924 kB' 'KernelStack: 27232 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12749168 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236120 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.895 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.896 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 
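Editor's note: verify_nr_hugepages opens by confirming transparent hugepages are not forced off (the 'always [madvise] never' string in the trace comes from sysfs) and by reading AnonHugePages, which is 0 kB in this run, hence anon=0. A small sketch of those two checks follows; the sysfs and procfs paths are the standard kernel locations and the script itself is my illustration, not the test's code.

  #!/usr/bin/env bash
  # Sketch of the two checks traced above: transparent hugepage mode and AnonHugePages.
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is available (always or madvise), so anonymous hugepage usage can be non-zero.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
  else
      anon=0
  fi
  echo "anon=${anon:-0}"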
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.897 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104720596 kB' 'MemAvailable: 108438300 kB' 'Buffers: 2704 kB' 'Cached: 14797224 kB' 'SwapCached: 0 kB' 'Active: 11659252 kB' 'Inactive: 3693560 kB' 'Active(anon): 11179452 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555156 kB' 'Mapped: 226104 kB' 'Shmem: 10626568 kB' 'KReclaimable: 585288 kB' 'Slab: 1465212 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879924 kB' 'KernelStack: 27120 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12746456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
[... xtrace of the field-by-field scan elided: MemTotal through HugePages_Rsvd are each compared against HugePages_Surp at setup/common.sh@31-32 and take the "continue" branch ...]
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
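For readability, here is a minimal sketch of what the get_meminfo helper traced above appears to do: read /proc/meminfo (or a per-NUMA-node meminfo file when a node argument is given), strip any "Node N " prefix, and scan the fields one by one until the requested key matches. It is reconstructed from the xtrace, not copied from setup/common.sh, so the argument handling and fallback details are assumptions.

```bash
#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the verbatim setup/common.sh helper.
shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen in the trace

get_meminfo() {
    local get=$1 node=${2:-}   # e.g. get=HugePages_Surp; node is empty in this run
    local var val _
    local mem_f mem
    mem_f=/proc/meminfo
    # The trace checks for a per-node meminfo file first; with node="" the path
    # /sys/devices/system/node/node/meminfo does not exist, so /proc/meminfo is used.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
    while IFS=': ' read -r var val _; do
        # every non-matching field takes the "continue" branch seen in the trace
        if [[ $var == "$get" ]]; then
            echo "$val"   # e.g. 0 for HugePages_Surp in the snapshot above
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on the node traced in this run
```

The surp=0 line at hugepages.sh@99 indicates the value is captured via command substitution (surp=$(get_meminfo HugePages_Surp)), which is why the echoed 0 never shows up as a plain output line in the log.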
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:36.899 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104721424 kB' 'MemAvailable: 108439128 kB' 'Buffers: 2704 kB' 'Cached: 14797240 kB' 'SwapCached: 0 kB' 'Active: 11656876 kB' 'Inactive: 3693560 kB' 'Active(anon): 11177076 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553752 kB' 'Mapped: 226004 kB' 'Shmem: 10626584 kB' 'KReclaimable: 585288 kB' 'Slab: 1464952 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879664 kB' 'KernelStack: 27136 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12746480 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB'
[... xtrace of the field-by-field scan elided: MemTotal through HugePages_Free are each compared against HugePages_Rsvd at setup/common.sh@31-32 and take the "continue" branch ...]
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:36.901 nr_hugepages=1024 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:36.901 resv_hugepages=0 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:36.901 surplus_hugepages=0 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:36.901 anon_hugepages=0 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
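The four echoed values and the two arithmetic tests above are the check this stage performs: the requested page count (1024 for this 2 GiB test) is compared against nr_hugepages plus the surplus and reserved counts just read, and then against nr_hugepages alone. A small sketch of that comparison with the values observed in this run; the variable names mirror the trace, while the standalone wiring around them is only illustrative.

```bash
# Values observed in this run (see the trace above); in the real script they
# come from get_meminfo and from the earlier allocation step.
nr_hugepages=1024   # echoed at hugepages.sh@102
surp=0              # get_meminfo HugePages_Surp
resv=0              # get_meminfo HugePages_Rsvd

# mirrors hugepages.sh@107 in the trace
(( 1024 == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
# mirrors hugepages.sh@109: with no surplus/reserved pages, the plain count already matches
(( 1024 == nr_hugepages )) && echo "nr_hugepages=$nr_hugepages as requested"
```

The meminfo snapshots are consistent with this: 'HugePages_Total: 1024' at 'Hugepagesize: 2048 kB' gives 1024 * 2048 kB = 2097152 kB, exactly the 'Hugetlb: 2097152 kB' line, i.e. the 2 GiB the even_2G_alloc test name refers to.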
00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.901 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104721172 kB' 'MemAvailable: 108438876 kB' 'Buffers: 2704 kB' 'Cached: 14797256 kB' 'SwapCached: 0 kB' 'Active: 11657332 kB' 'Inactive: 3693560 kB' 'Active(anon): 11177532 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554232 kB' 'Mapped: 226004 kB' 'Shmem: 10626600 kB' 'KReclaimable: 585288 kB' 'Slab: 1464952 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879664 kB' 'KernelStack: 27168 kB' 'PageTables: 8884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12746872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235976 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 
09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.902 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.903 
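The four entries that keep repeating above (IFS=': ', read -r var val _, a [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] test, continue) are xtrace output from setup/common.sh's get_meminfo helper scanning a meminfo file one field at a time until it reaches the requested key. A minimal reconstruction of that helper, pieced together from the traced @16-@33 line numbers; the loop structure here is assumed, so treat it as a sketch rather than the verbatim SPDK script:

shopt -s extglob                                  # needed for the +([0-9]) pattern below
get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # per-node queries such as "get_meminfo HugePages_Surp 0" read that node's own file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] || continue  # every mismatched field is one "continue" in the log
                echo "$val"                       # e.g. 1024 for the HugePages_Total lookup above
                return 0
        done
        return 1
}

Called as get_meminfo HugePages_Total it returns the 1024 echoed here; called as get_meminfo HugePages_Surp 0 it reads /sys/devices/system/node/node0/meminfo, which is what the trace below walks through next.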
09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:36.903 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58997804 kB' 'MemUsed: 6661204 kB' 'SwapCached: 0 kB' 'Active: 2721056 kB' 'Inactive: 235936 kB' 'Active(anon): 2481632 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2692748 kB' 'Mapped: 93664 kB' 'AnonPages: 267432 kB' 'Shmem: 2217388 kB' 'KernelStack: 15128 kB' 'PageTables: 
5512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784368 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.904 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45722796 kB' 'MemUsed: 14957040 kB' 'SwapCached: 0 kB' 'Active: 8936684 kB' 'Inactive: 3457624 kB' 'Active(anon): 8696308 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12107276 kB' 'Mapped: 132340 kB' 'AnonPages: 287160 kB' 'Shmem: 8409276 kB' 'KernelStack: 12056 kB' 
'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314604 kB' 'Slab: 680584 kB' 'SReclaimable: 314604 kB' 'SUnreclaim: 365980 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.905 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.906 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.907 09:53:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:36.907 node0=512 expecting 512 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:36.907 node1=512 expecting 512 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:36.907 00:04:36.907 real 0m3.789s 00:04:36.907 user 0m1.453s 00:04:36.907 sys 0m2.399s 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:36.907 09:53:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:36.907 ************************************ 00:04:36.907 END TEST even_2G_alloc 00:04:36.907 ************************************ 00:04:37.168 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.168 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.168 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.168 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.168 ************************************ 00:04:37.168 START TEST odd_alloc 00:04:37.168 
************************************ 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.168 09:53:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:40.471 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 
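Just above, odd_alloc requests HUGEMEM=2049 (MiB) with HUGE_EVEN_ALLOC=yes, and the helpers settle on nr_hugepages=1025, split as 513 pages on node0 and 512 on node1, before handing control to scripts/setup.sh (whose per-device output continues below). A sketch of that sizing arithmetic; the rounding inside get_test_nr_hugepages is an assumption, since the trace only shows the resulting 1025/513/512, and the variable names here are illustrative:

HUGEMEM_MB=2049                                        # HUGEMEM=2049 from hugepages.sh@160
size_kb=$((HUGEMEM_MB * 1024))                         # 2098176 kB, the argument to get_test_nr_hugepages
page_kb=2048                                           # default 2 MiB hugepage size
nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))  # 1025, deliberately odd, hence "odd_alloc"
no_nodes=2                                             # two NUMA nodes on this rig
node1=$((nr_hugepages / no_nodes))                     # 512
node0=$((nr_hugepages - node1))                        # 513, the leftover page lands on node0
echo "nr_hugepages=$nr_hugepages node0=$node0 node1=$node1"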
0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:40.471 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:40.471 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.738 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104694644 kB' 'MemAvailable: 108412348 kB' 'Buffers: 2704 kB' 'Cached: 14797416 kB' 'SwapCached: 0 kB' 'Active: 11657964 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178164 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554712 kB' 'Mapped: 226032 kB' 'Shmem: 10626760 kB' 'KReclaimable: 585288 kB' 'Slab: 1465176 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879888 kB' 'KernelStack: 27168 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12747888 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236008 kB' 
'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.739 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 
09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104695396 kB' 'MemAvailable: 108413100 kB' 'Buffers: 2704 kB' 'Cached: 14797420 kB' 'SwapCached: 0 kB' 'Active: 11658244 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178444 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555028 kB' 'Mapped: 226024 kB' 'Shmem: 10626764 kB' 'KReclaimable: 585288 kB' 'Slab: 1465208 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27200 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12747908 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.740 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.741 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104696216 kB' 'MemAvailable: 108413920 kB' 'Buffers: 2704 kB' 'Cached: 14797452 kB' 'SwapCached: 0 kB' 'Active: 11658356 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178556 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555036 kB' 'Mapped: 226024 kB' 'Shmem: 10626796 kB' 'KReclaimable: 585288 kB' 'Slab: 1465208 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27200 kB' 'PageTables: 8956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12747928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.742 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.742 09:53:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 
09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.743 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:40.744 nr_hugepages=1025 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.744 resv_hugepages=0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.744 surplus_hugepages=0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.744 anon_hugepages=0 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@20 -- # local mem_f mem 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104697228 kB' 'MemAvailable: 108414932 kB' 'Buffers: 2704 kB' 'Cached: 14797476 kB' 'SwapCached: 0 kB' 'Active: 11657972 kB' 'Inactive: 3693560 kB' 'Active(anon): 11178172 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554640 kB' 'Mapped: 226024 kB' 'Shmem: 10626820 kB' 'KReclaimable: 585288 kB' 'Slab: 1465208 kB' 'SReclaimable: 585288 kB' 'SUnreclaim: 879920 kB' 'KernelStack: 27184 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508424 kB' 'Committed_AS: 12747948 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.744 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.745 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.745 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.745 09:53:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' [xtrace condensed: every remaining /proc/meminfo key from Cached through DirectMap1G is read and skipped with 'continue' because it is not HugePages_Total] 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val
_ 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.746 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58984916 kB' 'MemUsed: 6674092 kB' 'SwapCached: 0 kB' 'Active: 2720144 kB' 'Inactive: 235936 kB' 'Active(anon): 2480720 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2692832 kB' 'Mapped: 93684 kB' 'AnonPages: 266456 kB' 'Shmem: 2217472 kB' 'KernelStack: 15128 kB' 'PageTables: 5528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784104 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513420 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:40.746 09:53:19 [xtrace condensed: every node0 meminfo key from MemTotal through HugePages_Free is read and skipped with 'continue' because it is not HugePages_Surp] 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 45713428 kB' 'MemUsed: 14966408 kB' 'SwapCached: 0 kB' 'Active: 8938108 kB' 'Inactive: 3457624 kB' 'Active(anon): 8697732 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12107352 kB' 'Mapped: 132340 kB' 'AnonPages: 288484 kB' 'Shmem: 8409352 kB' 'KernelStack: 12040 kB' 'PageTables: 3340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314604 kB' 'Slab: 681104 kB' 'SReclaimable: 314604 kB' 'SUnreclaim: 366500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.748 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
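Every lookup traced here follows the same pattern: pick /proc/meminfo or a per-NUMA-node meminfo file, strip the "Node <N>" prefix, then scan key by key until the requested field is found and print its value. A minimal standalone sketch of that lookup, with an illustrative function name (not the exact setup/common.sh helper):

#!/usr/bin/env bash
# Sketch of the meminfo lookup the xtrace above performs: fetch one key,
# optionally from a specific NUMA node's meminfo file. Illustrative only.
get_meminfo_sketch() {
    local key=$1 node=$2
    local mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <N> "; strip that prefix so the
    # same "key: value" parse works for both the global and the per-node file.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
    return 1
}

# Example (values differ per machine):
#   get_meminfo_sketch HugePages_Total      # system-wide, as in the check above
#   get_meminfo_sketch HugePages_Surp 0     # NUMA node 0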
[xtrace condensed: every node1 meminfo key from MemUsed through HugePages_Free is read and skipped with 'continue' because it is not HugePages_Surp] 00:04:40.749 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:40.749 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.749 09:53:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:41.011 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node
in "${!nodes_test[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:41.012 node0=512 expecting 513 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:41.012 node1=513 expecting 512 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:41.012 00:04:41.012 real 0m3.783s 00:04:41.012 user 0m1.460s 00:04:41.012 sys 0m2.383s 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.012 09:53:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.012 ************************************ 00:04:41.012 END TEST odd_alloc 00:04:41.012 ************************************ 00:04:41.012 09:53:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:41.012 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.012 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.012 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.012 ************************************ 00:04:41.012 START TEST custom_alloc 00:04:41.012 ************************************ 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:41.012 09:53:19 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in 
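The custom_alloc setup above settles on nodes_hp[0]=512 and nodes_hp[1]=1024 and, just below, folds them into the HUGENODE string passed to scripts/setup.sh. A minimal sketch, assuming root and the default 2048 kB hugepage size, of how such per-node counts can be requested through the kernel's sysfs interface (the helper name is illustrative; this is not the scripts/setup.sh implementation):

#!/usr/bin/env bash
# Sketch (requires root): request a 2048 kB hugepage count on one NUMA node via
# sysfs, which is the kernel interface behind per-node hugepage reservations.
set_node_hugepages_sketch() {
    local node=$1 count=$2
    local sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
    echo "$count" > "$sysfs"
    # Read back what the kernel actually reserved; it can be lower than requested
    # if contiguous memory on that node is fragmented.
    echo "node$node: requested $count, got $(cat "$sysfs")"
}

# Mirroring the per-node counts assembled above:
#   set_node_hugepages_sketch 0 512
#   set_node_hugepages_sketch 1 1024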
"${!nodes_hp[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.012 09:53:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.318 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:44.318 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 
00:04:44.318 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:44.318 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103671780 kB' 'MemAvailable: 107389388 kB' 'Buffers: 2704 kB' 'Cached: 14797588 kB' 'SwapCached: 0 kB' 'Active: 11659952 kB' 'Inactive: 3693560 kB' 'Active(anon): 11180152 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556596 kB' 'Mapped: 226112 kB' 'Shmem: 10626932 kB' 'KReclaimable: 585192 kB' 'Slab: 1464672 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879480 kB' 'KernelStack: 27152 kB' 'PageTables: 8828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12748704 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235800 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.583 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
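The AnonHugePages lookup traced above (setup/common.sh@17-33, ending in anon=0) reduces to a small pattern: snapshot the meminfo source into an array with mapfile, strip any leading "Node <n> " prefix, then scan key/value pairs with IFS=': ' until the requested field is found. The same pattern repeats below for HugePages_Surp and HugePages_Rsvd. The following is a minimal illustrative sketch of that pattern, not the repository's setup/common.sh verbatim; the helper name get_meminfo_sketch is hypothetical.

    # Illustrative sketch of the lookup pattern visible in the trace above.
    shopt -s extglob                       # needed for the "Node +([0-9]) " strip
    get_meminfo_sketch() {
        local get=$1 node=${2:-}           # field name, optional NUMA node id
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"          # one "Key: value unit" entry per element
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
        local var val
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        echo 0
    }
    get_meminfo_sketch HugePages_Total     # on the host captured above this prints 1536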
00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.584 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103671244 kB' 'MemAvailable: 107388852 kB' 'Buffers: 2704 kB' 'Cached: 14797592 kB' 'SwapCached: 0 kB' 'Active: 11659908 kB' 'Inactive: 3693560 kB' 'Active(anon): 11180108 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556544 kB' 'Mapped: 226048 kB' 'Shmem: 10626936 kB' 'KReclaimable: 585192 kB' 'Slab: 1464716 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879524 kB' 'KernelStack: 27152 kB' 'PageTables: 8812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12748724 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235768 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 
09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.585 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
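A quick consistency check implied by the snapshots printed above (illustrative arithmetic, not something the test runs): the per-node split computed earlier, nodes_hp[0]=512 plus nodes_hp[1]=1024, sums to the 1536 pages the kernel reports as HugePages_Total and HugePages_Free, and 1536 pages at the 2048 kB Hugepagesize account for the Hugetlb figure of 3145728 kB, with HugePages_Rsvd and HugePages_Surp both 0.

    # Hypothetical spot-check of the numbers above; not part of the test itself.
    echo $(( 512 + 1024 ))     # 1536     == nr_hugepages / HugePages_Total
    echo $(( 1536 * 2048 ))    # 3145728  == Hugetlb in kB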
00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.586 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103671244 kB' 'MemAvailable: 107388852 kB' 'Buffers: 2704 kB' 'Cached: 14797592 kB' 'SwapCached: 0 kB' 'Active: 11660712 kB' 'Inactive: 3693560 kB' 'Active(anon): 11180912 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557400 kB' 'Mapped: 226048 kB' 'Shmem: 
10626936 kB' 'KReclaimable: 585192 kB' 'Slab: 1464716 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879524 kB' 'KernelStack: 27152 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12748744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235784 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 
09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.587 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
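[Editor's note] The long run of common.sh@31/@32 entries above and below is setup/common.sh's get_meminfo stepping through /proc/meminfo one key at a time: IFS=': ' read -r var val _ splits each line, every key that is not the requested one (HugePages_Rsvd at this point in the trace) hits continue, and the first match is echoed back to the caller. A minimal standalone sketch of that scan-and-match pattern, reduced to the plain /proc/meminfo case (the traced helper additionally snapshots the data with mapfile and handles per-node files, as later entries show):

  # Sketch only: the scan-and-match idea behind setup/common.sh's get_meminfo,
  # without the mapfile snapshot or per-node handling seen in this trace.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # skip every key we were not asked for
          echo "$val"                        # e.g. 0 for HugePages_Rsvd on this box
          return 0
      done < /proc/meminfo
      return 1                               # requested key not present
  }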
00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.588 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:44.589 nr_hugepages=1536 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.589 resv_hugepages=0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.589 surplus_hugepages=0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.589 anon_hugepages=0 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 103671736 kB' 'MemAvailable: 107389344 kB' 'Buffers: 2704 kB' 'Cached: 14797632 kB' 'SwapCached: 0 kB' 'Active: 11659704 kB' 'Inactive: 3693560 kB' 'Active(anon): 11179904 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556292 kB' 'Mapped: 226048 kB' 'Shmem: 10626976 kB' 'KReclaimable: 585192 kB' 'Slab: 1464716 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879524 kB' 'KernelStack: 27120 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985160 kB' 'Committed_AS: 12748764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235768 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.589 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.590 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.854 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 58995524 kB' 'MemUsed: 6663484 kB' 'SwapCached: 0 kB' 'Active: 2720480 kB' 'Inactive: 235936 kB' 'Active(anon): 2481056 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2692972 kB' 'Mapped: 93704 kB' 'AnonPages: 266600 kB' 'Shmem: 2217612 kB' 'KernelStack: 15048 kB' 'PageTables: 5280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 783952 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.855 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
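[Editor's note] The node-0 lookup that starts above (common.sh@18 local node=0 through @29) switches the data source: when a node is given and /sys/devices/system/node/node0/meminfo exists, mem_f points at that file instead of /proc/meminfo, and the leading "Node <N> " prefix is stripped from every captured line with an extglob substitution so the keys parse exactly like the global file; the field scan that continues below then works unchanged. A reduced sketch of that selection step (illustrative: the file is read directly here, whereas the traced helper feeds mapfile from the printf snapshot seen at common.sh@16):

  # Sketch: choose the per-node meminfo source and normalise its lines.
  shopt -s extglob
  node=0                                   # empty string would mean "whole system"
  mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")         # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"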
00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 
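[Editor's note] Around this point hugepages.sh@115-@117 folds the per-node reservations together: for every index of nodes_test it first adds resv (0 in this run) and then the node's own HugePages_Surp as returned by get_meminfo, node 0 above and node 1 below. A compact sketch of that accumulation, assuming an indexed nodes_test array (the 512/1024 seeds are the counts this trace records for nodes_sys) and a get_meminfo helper that prints the requested per-node counter:

  # Sketch of the hugepages.sh@115-@117 loop: add reserved and per-node
  # surplus pages onto the expected per-node totals.
  nodes_test=([0]=512 [1]=1024)                    # illustrative seeds from this run
  resv=0
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))
      surp=$(get_meminfo HugePages_Surp "$node")   # both nodes report 0 here
      (( nodes_test[node] += surp ))
  done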
00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679836 kB' 'MemFree: 44676552 kB' 'MemUsed: 16003284 kB' 'SwapCached: 0 kB' 'Active: 8939408 kB' 'Inactive: 3457624 kB' 'Active(anon): 8699032 kB' 'Inactive(anon): 0 kB' 'Active(file): 240376 kB' 'Inactive(file): 3457624 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12107384 kB' 'Mapped: 132344 kB' 'AnonPages: 289836 kB' 'Shmem: 8409384 kB' 'KernelStack: 12056 kB' 'PageTables: 3388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 314508 kB' 'Slab: 680764 kB' 'SReclaimable: 314508 kB' 'SUnreclaim: 366256 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.856 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
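[Editor's note] As the entries that follow show, node 1 finishes the same way node 0 did: HugePages_Surp comes back 0, so nothing beyond the reserved count is added. Taken together with the values recorded earlier in the trace, the quantities custom_alloc is checking reduce to simple arithmetic: the system-wide HugePages_Total of 1536 must equal nr_hugepages + surplus + reserved (1536 + 0 + 0), and the per-node totals reported by node0 and node1 meminfo (512 and 1024) account for the same 1536 pages. A tiny worked sketch with this run's numbers (an illustration of the bookkeeping, not a verbatim excerpt of hugepages.sh):

  # Values taken from this trace
  nr_hugepages=1536 resv=0 surp=0
  node0_total=512 node1_total=1024
  (( nr_hugepages + surp + resv == 1536 ))          # the @107/@110 style check holds
  (( node0_total + node1_total == nr_hugepages ))   # 512 + 1024 == 1536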
00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.857 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.858 node0=512 expecting 512 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:44.858 node1=1024 expecting 1024 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:44.858 00:04:44.858 real 0m3.831s 00:04:44.858 user 0m1.510s 00:04:44.858 sys 0m2.383s 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.858 09:53:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.858 ************************************ 00:04:44.858 END TEST custom_alloc 00:04:44.858 ************************************ 00:04:44.858 09:53:23 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:44.858 09:53:23 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.858 09:53:23 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.858 09:53:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.858 ************************************ 00:04:44.858 START TEST no_shrink_alloc 00:04:44.858 ************************************ 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- 
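Note on the numbers just reported: custom_alloc's final check ("[[ 512,1024 == 512,1024 ]]") confirms that node0 holds 512 and node1 holds 1024 hugepages, as expected, and the no_shrink_alloc test starting here requests a 2097152 kB pool on node 0 via get_test_nr_hugepages 2097152 0. The resulting nr_hugepages=1024 is consistent with the 2048 kB hugepage size shown in the meminfo snapshots that follow; a back-of-the-envelope sketch, with units and formula inferred from the trace rather than quoted from setup/hugepages.sh:

size_kb=2097152                                  # pool size passed to get_test_nr_hugepages (assumed kB)
hugepagesize_kb=2048                             # "Hugepagesize: 2048 kB" in the snapshots below
nr_hugepages=$(( size_kb / hugepagesize_kb ))    # = 1024, matching nr_hugepages=1024 above
nodes_test[0]=$nr_hugepages                      # user_nodes=('0'): all pages are assigned to node 0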
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.858 09:53:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:48.165 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:48.165 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.165 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' 
]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104730656 kB' 'MemAvailable: 108448264 kB' 'Buffers: 2704 kB' 'Cached: 14797764 kB' 'SwapCached: 0 kB' 'Active: 11663280 kB' 'Inactive: 3693560 kB' 'Active(anon): 11183480 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559116 kB' 'Mapped: 226168 kB' 'Shmem: 10627108 kB' 'KReclaimable: 585192 kB' 'Slab: 1464916 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879724 kB' 'KernelStack: 27424 kB' 'PageTables: 9528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236168 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.428 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.429 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104728696 kB' 'MemAvailable: 108446304 kB' 'Buffers: 2704 kB' 'Cached: 14797764 kB' 'SwapCached: 0 kB' 'Active: 11662552 kB' 'Inactive: 3693560 kB' 'Active(anon): 11182752 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558464 kB' 'Mapped: 226228 kB' 'Shmem: 10627108 kB' 'KReclaimable: 585192 kB' 'Slab: 1464904 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879712 kB' 'KernelStack: 27376 kB' 'PageTables: 9156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12751264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.430 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 
09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.697 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104727452 kB' 'MemAvailable: 108445060 kB' 'Buffers: 2704 kB' 'Cached: 14797784 kB' 'SwapCached: 0 kB' 'Active: 11661744 kB' 'Inactive: 3693560 kB' 'Active(anon): 11181944 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558060 kB' 'Mapped: 226092 kB' 'Shmem: 10627128 kB' 'KReclaimable: 585192 kB' 'Slab: 1464908 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879716 kB' 'KernelStack: 27232 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753016 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 
kB' 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.698 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.699 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 
09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.700 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:48.700 nr_hugepages=1024 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.700 resv_hugepages=0 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.700 surplus_hugepages=0 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.700 anon_hugepages=0 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.700 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104726852 kB' 'MemAvailable: 108444460 kB' 'Buffers: 2704 kB' 'Cached: 14797804 kB' 'SwapCached: 0 kB' 'Active: 11661616 kB' 'Inactive: 3693560 kB' 'Active(anon): 11181816 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557876 kB' 'Mapped: 226092 kB' 'Shmem: 10627148 kB' 'KReclaimable: 585192 kB' 'Slab: 1464908 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879716 kB' 'KernelStack: 27328 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753036 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236072 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 
4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.701 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 
09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.702 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # 
nodes_sys[${node##*node}]=1024 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 57954452 kB' 'MemUsed: 7704556 kB' 'SwapCached: 0 kB' 'Active: 2720808 kB' 'Inactive: 235936 kB' 'Active(anon): 2481384 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2693096 kB' 'Mapped: 93748 kB' 'AnonPages: 266856 kB' 'Shmem: 2217736 kB' 'KernelStack: 15032 kB' 'PageTables: 5336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784324 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.703 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:48.704 node0=1024 expecting 1024 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.704 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.049 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:52.049 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.049 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104747088 kB' 'MemAvailable: 108464696 kB' 'Buffers: 2704 kB' 'Cached: 14797916 kB' 'SwapCached: 0 kB' 'Active: 11664260 kB' 'Inactive: 3693560 kB' 'Active(anon): 11184460 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559700 kB' 'Mapped: 226244 kB' 'Shmem: 10627260 kB' 'KReclaimable: 585192 kB' 'Slab: 1464576 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879384 kB' 'KernelStack: 27120 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12752052 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235992 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.049 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.050 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104747200 kB' 'MemAvailable: 108464808 kB' 'Buffers: 2704 kB' 'Cached: 14797920 kB' 'SwapCached: 0 kB' 'Active: 11664544 kB' 'Inactive: 3693560 kB' 'Active(anon): 11184744 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560576 kB' 'Mapped: 226244 kB' 'Shmem: 10627264 kB' 'KReclaimable: 585192 kB' 'Slab: 1464576 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879384 kB' 'KernelStack: 27392 kB' 'PageTables: 9116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236232 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 
09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.051 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.052 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104746388 kB' 'MemAvailable: 108463996 kB' 'Buffers: 2704 kB' 'Cached: 14797924 kB' 'SwapCached: 0 kB' 'Active: 11663552 kB' 'Inactive: 3693560 kB' 'Active(anon): 11183752 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559888 kB' 'Mapped: 226128 kB' 'Shmem: 10627268 kB' 'KReclaimable: 585192 kB' 'Slab: 1464552 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879360 kB' 'KernelStack: 27344 kB' 'PageTables: 9320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753828 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236200 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.053 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 
09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.054 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.055 nr_hugepages=1024 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.055 resv_hugepages=0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.055 surplus_hugepages=0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.055 anon_hugepages=0 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338844 kB' 'MemFree: 104745656 kB' 'MemAvailable: 108463264 kB' 'Buffers: 2704 kB' 'Cached: 14797956 kB' 'SwapCached: 0 kB' 'Active: 11663052 kB' 'Inactive: 3693560 kB' 'Active(anon): 11183252 kB' 'Inactive(anon): 0 kB' 'Active(file): 479800 kB' 'Inactive(file): 3693560 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559492 kB' 'Mapped: 226120 kB' 'Shmem: 10627300 kB' 'KReclaimable: 585192 kB' 'Slab: 1464488 kB' 'SReclaimable: 585192 kB' 'SUnreclaim: 879296 kB' 'KernelStack: 27360 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509448 kB' 'Committed_AS: 12753848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 236152 kB' 'VmallocChunk: 0 kB' 'Percpu: 156096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4574580 kB' 'DirectMap2M: 29708288 kB' 'DirectMap1G: 101711872 kB' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.055 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.056 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
65659008 kB' 'MemFree: 57979352 kB' 'MemUsed: 7679656 kB' 'SwapCached: 0 kB' 'Active: 2722696 kB' 'Inactive: 235936 kB' 'Active(anon): 2483272 kB' 'Inactive(anon): 0 kB' 'Active(file): 239424 kB' 'Inactive(file): 235936 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2693220 kB' 'Mapped: 93776 kB' 'AnonPages: 268780 kB' 'Shmem: 2217860 kB' 'KernelStack: 15128 kB' 'PageTables: 5428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 270684 kB' 'Slab: 784116 kB' 'SReclaimable: 270684 kB' 'SUnreclaim: 513432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 
09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.057 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.058 node0=1024 expecting 1024 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.058 00:04:52.058 real 0m7.104s 00:04:52.058 user 0m2.647s 00:04:52.058 sys 0m4.437s 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.058 09:53:30 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.058 ************************************ 00:04:52.058 END TEST no_shrink_alloc 00:04:52.058 ************************************ 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 
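Stripped of the xtrace noise, the no_shrink_alloc test above reduces to a handful of meminfo lookups plus an accounting check: setup/common.sh scans /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo) key by key with IFS=': ', skipping entries until the requested one matches, and the test then confirms that the per-node huge page counts add up to the global pool (node0=1024 expecting 1024). A minimal standalone sketch of that lookup pattern, with an illustrative function name rather than the real setup/common.sh interface; the clear_hp teardown trace continues below.
# lookup_meminfo <Key> [<numa-node>] -- prints the value column for <Key>.
# Illustrative sketch of the get_meminfo pattern traced above, not the SPDK helper itself.
lookup_meminfo() {
    local key=$1 node=${2:-} file=/proc/meminfo
    # Per-node statistics live in sysfs and prefix every line with "Node <N> ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        line=${line#"Node $node "}              # drop the per-node prefix when present
        IFS=': ' read -r var val _ <<< "$line"  # split "Key:   value kB" into key and value
        if [[ $var == "$key" ]]; then
            echo "$val"
            return 0
        fi
    done < "$file"
    return 1
}
# Mirrors the trace: lookup_meminfo HugePages_Rsvd -> 0, lookup_meminfo HugePages_Surp 0 -> 0.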
00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.058 09:53:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.058 00:04:52.058 real 0m26.891s 00:04:52.058 user 0m10.391s 00:04:52.058 sys 0m16.738s 00:04:52.058 09:53:31 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.058 09:53:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.059 ************************************ 00:04:52.059 END TEST hugepages 00:04:52.059 ************************************ 00:04:52.059 09:53:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.059 09:53:31 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:52.059 09:53:31 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.059 09:53:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.059 ************************************ 00:04:52.059 START TEST driver 00:04:52.059 ************************************ 00:04:52.059 09:53:31 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.320 * Looking for test storage... 
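Before the hugepages suite ends, clear_hp walks every hugepage size on every NUMA node and zeroes its pool, then exports CLEAR_HUGE=yes for the later setup.sh runs. xtrace hides the redirection target of the "echo 0" calls above; nr_hugepages is the standard control file for each pool, so a sketch of that teardown (needs root) looks like:

# Return every per-node hugepage pool to zero, as the clear_hp loop above does.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"     # releases the reserved pages back to the kernel
    done
done
export CLEAR_HUGE=yes                   # picked up by the subsequent setup.sh invocations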
00:04:52.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.320 09:53:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:52.320 09:53:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.320 09:53:31 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:56.530 09:53:35 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:56.530 09:53:35 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.530 09:53:35 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.530 09:53:35 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:56.530 ************************************ 00:04:56.530 START TEST guess_driver 00:04:56.530 ************************************ 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:56.530 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:56.530 Looking for driver=vfio-pci 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.530 09:53:35 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:59.833 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:38 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.095 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.096 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.096 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.357 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:00.357 09:53:39 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:00.357 09:53:39 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.357 09:53:39 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.652 00:05:05.652 real 0m8.797s 00:05:05.652 user 0m2.970s 00:05:05.652 sys 0m5.048s 00:05:05.652 09:53:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.652 09:53:44 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.652 ************************************ 00:05:05.652 END TEST guess_driver 00:05:05.652 ************************************ 00:05:05.652 00:05:05.652 real 0m13.306s 00:05:05.652 user 0m4.115s 00:05:05.652 sys 0m7.510s 00:05:05.652 09:53:44 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.652 
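The guess_driver test above settles on vfio-pci by checking that the host has an active IOMMU (or unsafe no-IOMMU mode switched on) and that modprobe can resolve the vfio_pci module without loading it. A condensed sketch of that decision; the real driver.sh also has fallback paths that are omitted here:

# Report vfio-pci if it is usable on this host, following the checks traced above.
pick_driver() {
    local unsafe=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi

    # One directory per IOMMU group; a populated directory means the IOMMU is active.
    local groups
    groups=$(find /sys/kernel/iommu_groups -mindepth 1 -maxdepth 1 2>/dev/null | wc -l)

    if (( groups > 0 )) || [[ $unsafe == Y ]]; then
        # --show-depends prints the .ko chain without actually loading the module.
        if modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
    return 1
}
pick_driver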
09:53:44 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.652 ************************************ 00:05:05.652 END TEST driver 00:05:05.652 ************************************ 00:05:05.652 09:53:44 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.652 09:53:44 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.652 09:53:44 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.652 09:53:44 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:05.652 ************************************ 00:05:05.652 START TEST devices 00:05:05.652 ************************************ 00:05:05.652 09:53:44 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:05.652 * Looking for test storage... 00:05:05.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:05.652 09:53:44 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.652 09:53:44 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:05.652 09:53:44 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.652 09:53:44 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:09.864 09:53:48 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:09.864 No valid GPT data, 
bailing 00:05:09.864 09:53:48 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:09.864 09:53:48 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:09.864 09:53:48 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.864 09:53:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:09.864 ************************************ 00:05:09.864 START TEST nvme_mount 00:05:09.864 ************************************ 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.864 09:53:48 
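The devices suite starts by choosing a namespace it may safely scribble on: zoned devices are excluded, devices that look in use (for example, already partitioned) are skipped, and the disk must hold at least min_disk_size=3221225472 bytes (3 GiB). The "No valid GPT data, bailing" line above means this namespace had no GPT and passed that check. A sketch of the screening using blkid alone; the candidate loop and names are illustrative:

# Screen /sys/block for NVMe namespaces that are blank, non-zoned and big enough.
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3 GiB, as in the trace

for sys in /sys/block/nvme*; do
    dev=${sys##*/}
    [[ $dev == *c* ]] && continue                               # skip nvme*c* multipath paths
    [[ -e $sys/queue/zoned && $(cat "$sys/queue/zoned") != none ]] && continue   # skip ZNS
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev" 2>/dev/null) ]] && continue     # existing table
    size_bytes=$(( $(cat "$sys/size") * 512 ))                  # size file counts 512-byte sectors
    (( size_bytes >= min_disk_size )) && echo "candidate: /dev/$dev ($size_bytes bytes)"
done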
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.864 09:53:48 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:10.435 Creating new GPT entries in memory. 00:05:10.435 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.435 other utilities. 00:05:10.435 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.435 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.435 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.435 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.435 09:53:49 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.378 Creating new GPT entries in memory. 00:05:11.378 The operation has completed successfully. 00:05:11.378 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:11.378 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.378 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1057566 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
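partition_drive above zaps the namespace, creates a single 1 GiB partition under an flock on the disk, and then the mkfs helper formats and mounts it. The same sequence, condensed (destructive; the mount point here is illustrative rather than the test's path, and udevadm settle stands in for the sync_dev_uevents.sh helper the test uses):

# Carve a 1 GiB test partition, format it and mount it, as traced above.
disk=/dev/nvme0n1
mnt=/tmp/spdk_test_nvme_mount

sgdisk "$disk" --zap-all                     # drop any existing GPT/MBR structures
# sectors 2048..2099199 (512 B each) = exactly 1 GiB, matching --new=1:2048:2099199
flock "$disk" sgdisk "$disk" --new=1:2048:2099199
udevadm settle                               # wait for the partition node to appear

mkfs.ext4 -qF "${disk}p1"
mkdir -p "$mnt"
mount "${disk}p1" "$mnt"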
00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.639 09:53:50 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:14.945 09:53:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.207 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.207 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.468 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.468 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.468 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.468 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:15.468 09:53:54 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.468 09:53:54 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:18.836 09:53:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.098 09:53:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 
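Each verify pass above re-runs scripts/setup.sh config with PCI_ALLOWED pinned to the controller under test and scans its per-device status lines: as long as the namespace is mounted (or held by device-mapper), setup.sh reports "Active devices: ..., so not binding PCI dev" and leaves the device alone. A rough sketch of that check; the exact output format of setup.sh config is assumed from the fields the traced read loop consumes:

# Check that setup.sh refuses to rebind a device that is in use.
pci=0000:65:00.0                 # controller under test, from the trace
want='nvme0n1:nvme0n1p1'         # the device:partition pair verify expects among the active devices

found=0
while read -r dev _ _ status; do
    [[ $dev == "$pci" && $status == *'Active devices: '*"$want"* ]] && found=1
done < <(PCI_ALLOWED=$pci ./scripts/setup.sh config)   # run from the SPDK repo root

(( found == 1 )) && echo 'device is protected by its active mount'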
00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.406 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:22.407 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.979 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.979 00:05:22.979 real 0m13.377s 00:05:22.979 user 0m4.101s 00:05:22.979 sys 0m7.154s 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.979 09:54:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:22.979 ************************************ 00:05:22.979 END TEST nvme_mount 00:05:22.979 ************************************ 00:05:22.979 
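With the checks done, cleanup_nvme above unmounts the test mount and wipes both the partition and the parent namespace so the next suite starts from a blank disk. The same teardown in isolation (mount point illustrative):

# Undo the nvme_mount state: unmount, then clear filesystem and GPT signatures.
disk=/dev/nvme0n1
mnt=/tmp/spdk_test_nvme_mount

mountpoint -q "$mnt" && umount "$mnt"
[[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"   # ext4 signature: "2 bytes were erased ... 53 ef"
[[ -b $disk ]] && wipefs --all "$disk"           # clears the GPT header and backup, as in the trace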
09:54:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:22.979 09:54:01 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.979 09:54:01 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.979 09:54:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:22.979 ************************************ 00:05:22.979 START TEST dm_mount 00:05:22.979 ************************************ 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.979 09:54:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:23.921 Creating new GPT entries in memory. 00:05:23.921 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.921 other utilities. 00:05:23.921 09:54:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.921 09:54:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.921 09:54:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.921 09:54:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.921 09:54:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:24.864 Creating new GPT entries in memory. 00:05:24.864 The operation has completed successfully. 
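partition_drive in the dm_mount test repeats the zap-and-repartition dance, this time creating two 1 GiB partitions, and it only proceeds once the kernel has announced the new partition nodes: the sync_dev_uevents.sh helper is started before sgdisk and then waited on. A simpler stand-in for that synchronization is to poll for the block nodes (the names below are just the ones from this run):

# Wait until the freshly created partitions actually exist under /dev.
wait_for_parts() {
    local part t
    for part in "$@"; do
        t=0
        until [[ -b /dev/$part ]]; do
            (( t++ >= 10 )) && { echo "timed out waiting for $part" >&2; return 1; }
            sleep 1
        done
    done
}

# e.g. right after the two flock'd "sgdisk --new" calls in the trace
wait_for_parts nvme0n1p1 nvme0n1p2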
00:05:24.864 09:54:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:24.864 09:54:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:24.864 09:54:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:24.864 09:54:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:24.864 09:54:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:26.256 The operation has completed successfully. 00:05:26.256 09:54:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:26.256 09:54:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.256 09:54:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1062639 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:26.256 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.257 09:54:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:28.805 09:54:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.065 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.065 09:54:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:29.065 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:29.065 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:29.066 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:29.327 
09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.327 09:54:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:32.630 09:54:11 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:32.892 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.153 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.153 00:05:33.153 real 0m10.161s 00:05:33.153 user 0m2.524s 00:05:33.153 sys 0m4.607s 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.153 09:54:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:33.153 ************************************ 00:05:33.153 END TEST dm_mount 00:05:33.153 ************************************ 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.153 09:54:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.414 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:33.414 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:33.414 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.414 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.414 09:54:12 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.414 00:05:33.414 real 0m27.926s 00:05:33.414 user 0m8.143s 00:05:33.414 sys 0m14.495s 00:05:33.414 09:54:12 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.414 09:54:12 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:33.414 ************************************ 00:05:33.414 END TEST devices 00:05:33.414 ************************************ 00:05:33.414 00:05:33.414 real 1m34.291s 00:05:33.414 user 0m31.276s 00:05:33.415 sys 0m54.016s 00:05:33.415 09:54:12 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.415 09:54:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:33.415 ************************************ 00:05:33.415 END TEST setup.sh 00:05:33.415 ************************************ 00:05:33.415 09:54:12 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:36.718 Hugepages 00:05:36.718 node hugesize free / total 00:05:36.718 node0 1048576kB 0 / 0 00:05:36.718 node0 2048kB 2048 / 2048 00:05:36.718 node1 1048576kB 0 / 0 00:05:36.718 node1 2048kB 0 / 0 00:05:36.718 00:05:36.718 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:36.718 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:36.718 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:36.980 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:36.980 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:36.980 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:36.980 09:54:16 -- spdk/autotest.sh@130 -- # uname -s 
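The dm_mount test finishes by unwinding everything it created, which is what the cleanup trace above shows. Condensed into a plain shell sketch (paths shortened; the real logic lives in test/setup/devices.sh, so treat this as an approximation rather than the verbatim helper):

  mount_dir=$SPDK_DIR/test/setup/dm_mount            # $SPDK_DIR stands in for the long workspace path
  mountpoint -q "$mount_dir" && umount "$mount_dir"  # drop the ext4 mount on the dm target
  [[ -L /dev/mapper/nvme_dm_test ]] && dmsetup remove --force nvme_dm_test
  for part in /dev/nvme0n1p1 /dev/nvme0n1p2; do
      [[ -b $part ]] && wipefs --all "$part"         # erase the ext4 signatures on both partitions
  done
  [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1 # erase the GPT copies and the protective MBR

The wipefs output above (8 bytes of "EFI PART" at both ends of the disk plus the 55 aa PMBR marker) is that last step, and the scripts/setup.sh status table that follows confirms the namespace is back to a bare nvme0n1 with no partitions.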
00:05:36.980 09:54:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:36.980 09:54:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:36.980 09:54:16 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:40.330 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:40.330 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:40.591 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:42.506 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:42.506 09:54:21 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:43.449 09:54:22 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:43.449 09:54:22 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:43.449 09:54:22 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.449 09:54:22 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:43.449 09:54:22 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:43.449 09:54:22 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:43.449 09:54:22 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.449 09:54:22 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.449 09:54:22 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:43.711 09:54:22 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:43.711 09:54:22 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:43.711 09:54:22 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.019 Waiting for block devices as requested 00:05:47.019 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:47.019 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:47.019 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:47.019 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:47.280 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:47.280 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:47.280 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:47.542 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:47.542 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:05:47.803 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:47.803 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:47.803 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:47.803 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:48.064 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:48.064 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:48.064 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:48.064 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:48.325 09:54:27 -- common/autotest_common.sh@1538 -- # 
for bdf in "${bdfs[@]}" 00:05:48.325 09:54:27 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:48.587 09:54:27 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:48.587 09:54:27 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:48.587 09:54:27 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:48.587 09:54:27 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:48.587 09:54:27 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:48.587 09:54:27 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:48.587 09:54:27 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:48.587 09:54:27 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:48.587 09:54:27 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:48.587 09:54:27 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:48.587 09:54:27 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:48.587 09:54:27 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:48.587 09:54:27 -- common/autotest_common.sh@1557 -- # continue 00:05:48.587 09:54:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:48.587 09:54:27 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:48.587 09:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.587 09:54:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:48.587 09:54:27 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.587 09:54:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.587 09:54:27 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:51.892 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:51.892 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:51.892 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:51.892 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:51.892 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:51.893 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:52.153 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:52.153 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:52.153 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:52.153 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:52.153 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:52.413 09:54:31 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:52.413 09:54:31 -- common/autotest_common.sh@730 -- # xtrace_disable 
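Before the env suite runs, nvme_namespace_revert above walks every NVMe controller to decide whether its namespaces need to be reverted. A rough bash reconstruction of the flow visible in the trace (the helper names match the trace, but the details are an approximation of common/autotest_common.sh and assume nvme-cli is installed):

  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))  # -> 0000:65:00.0 here
  for bdf in "${bdfs[@]}"; do
      # sysfs ties the PCI address to its controller node, e.g. .../0000:65:00.0/nvme/nvme0
      path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      ctrlr=/dev/$(basename "$path")                                          # /dev/nvme0
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)                 # 0x5f in this run
      (( (oacs & 0x8) == 0 )) && continue      # OACS bit 3: namespace management support
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      (( unvmcap == 0 )) && continue           # no unallocated capacity, nothing to revert
      # otherwise the namespaces would be deleted and recreated here
  done

In this run unvmcap is 0, so the loop takes the final continue and the drive is left untouched.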
00:05:52.413 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 09:54:31 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:52.413 09:54:31 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:52.413 09:54:31 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:52.413 09:54:31 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:52.413 09:54:31 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:52.413 09:54:31 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:52.413 09:54:31 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:52.413 09:54:31 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:52.413 09:54:31 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:52.413 09:54:31 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:52.413 09:54:31 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:52.413 09:54:31 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:52.413 09:54:31 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:52.413 09:54:31 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:52.413 09:54:31 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:52.413 09:54:31 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:52.413 09:54:31 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:52.413 09:54:31 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:52.413 09:54:31 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:52.413 09:54:31 -- common/autotest_common.sh@1593 -- # return 0 00:05:52.413 09:54:31 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:52.413 09:54:31 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:52.413 09:54:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.413 09:54:31 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:52.413 09:54:31 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:52.413 09:54:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.413 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.413 09:54:31 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:52.413 09:54:31 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.413 09:54:31 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.413 09:54:31 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.413 09:54:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.674 ************************************ 00:05:52.674 START TEST env 00:05:52.674 ************************************ 00:05:52.674 09:54:31 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:52.674 * Looking for test storage... 
00:05:52.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:52.674 09:54:31 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.674 09:54:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.674 09:54:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.674 09:54:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.674 ************************************ 00:05:52.674 START TEST env_memory 00:05:52.674 ************************************ 00:05:52.674 09:54:31 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:52.674 00:05:52.674 00:05:52.674 CUnit - A unit testing framework for C - Version 2.1-3 00:05:52.674 http://cunit.sourceforge.net/ 00:05:52.674 00:05:52.674 00:05:52.674 Suite: memory 00:05:52.674 Test: alloc and free memory map ...[2024-07-25 09:54:31.757310] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:52.674 passed 00:05:52.674 Test: mem map translation ...[2024-07-25 09:54:31.782987] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:52.674 [2024-07-25 09:54:31.783016] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:52.674 [2024-07-25 09:54:31.783063] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:52.674 [2024-07-25 09:54:31.783071] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:52.936 passed 00:05:52.936 Test: mem map registration ...[2024-07-25 09:54:31.838336] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:52.936 [2024-07-25 09:54:31.838366] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:52.936 passed 00:05:52.936 Test: mem map adjacent registrations ...passed 00:05:52.936 00:05:52.936 Run Summary: Type Total Ran Passed Failed Inactive 00:05:52.936 suites 1 1 n/a 0 0 00:05:52.936 tests 4 4 4 0 0 00:05:52.936 asserts 152 152 152 0 n/a 00:05:52.936 00:05:52.936 Elapsed time = 0.192 seconds 00:05:52.936 00:05:52.936 real 0m0.207s 00:05:52.936 user 0m0.193s 00:05:52.936 sys 0m0.013s 00:05:52.936 09:54:31 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.936 09:54:31 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:52.936 ************************************ 00:05:52.936 END TEST env_memory 00:05:52.936 ************************************ 00:05:52.936 09:54:31 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:52.936 09:54:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:52.936 09:54:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 
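Each env sub-test above and below is launched through the same run_test wrapper, which is where the asterisk banners and the real/user/sys timing lines come from. A minimal sketch of that pattern (simplified; the actual helper in common/autotest_common.sh also validates its arguments and manages xtrace):

  run_test() {
      local test_name=$1; shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                  # the real/user/sys lines in the log come from this
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  run_test env_memory "$rootdir/test/env/memory/memory_ut"
  run_test env_vtophys "$rootdir/test/env/vtophys/vtophys"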
00:05:52.936 09:54:31 env -- common/autotest_common.sh@10 -- # set +x 00:05:52.936 ************************************ 00:05:52.936 START TEST env_vtophys 00:05:52.936 ************************************ 00:05:52.936 09:54:31 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:52.936 EAL: lib.eal log level changed from notice to debug 00:05:52.936 EAL: Detected lcore 0 as core 0 on socket 0 00:05:52.936 EAL: Detected lcore 1 as core 1 on socket 0 00:05:52.936 EAL: Detected lcore 2 as core 2 on socket 0 00:05:52.936 EAL: Detected lcore 3 as core 3 on socket 0 00:05:52.936 EAL: Detected lcore 4 as core 4 on socket 0 00:05:52.936 EAL: Detected lcore 5 as core 5 on socket 0 00:05:52.936 EAL: Detected lcore 6 as core 6 on socket 0 00:05:52.936 EAL: Detected lcore 7 as core 7 on socket 0 00:05:52.936 EAL: Detected lcore 8 as core 8 on socket 0 00:05:52.936 EAL: Detected lcore 9 as core 9 on socket 0 00:05:52.936 EAL: Detected lcore 10 as core 10 on socket 0 00:05:52.936 EAL: Detected lcore 11 as core 11 on socket 0 00:05:52.936 EAL: Detected lcore 12 as core 12 on socket 0 00:05:52.936 EAL: Detected lcore 13 as core 13 on socket 0 00:05:52.936 EAL: Detected lcore 14 as core 14 on socket 0 00:05:52.936 EAL: Detected lcore 15 as core 15 on socket 0 00:05:52.936 EAL: Detected lcore 16 as core 16 on socket 0 00:05:52.936 EAL: Detected lcore 17 as core 17 on socket 0 00:05:52.936 EAL: Detected lcore 18 as core 18 on socket 0 00:05:52.936 EAL: Detected lcore 19 as core 19 on socket 0 00:05:52.936 EAL: Detected lcore 20 as core 20 on socket 0 00:05:52.936 EAL: Detected lcore 21 as core 21 on socket 0 00:05:52.936 EAL: Detected lcore 22 as core 22 on socket 0 00:05:52.936 EAL: Detected lcore 23 as core 23 on socket 0 00:05:52.936 EAL: Detected lcore 24 as core 24 on socket 0 00:05:52.937 EAL: Detected lcore 25 as core 25 on socket 0 00:05:52.937 EAL: Detected lcore 26 as core 26 on socket 0 00:05:52.937 EAL: Detected lcore 27 as core 27 on socket 0 00:05:52.937 EAL: Detected lcore 28 as core 28 on socket 0 00:05:52.937 EAL: Detected lcore 29 as core 29 on socket 0 00:05:52.937 EAL: Detected lcore 30 as core 30 on socket 0 00:05:52.937 EAL: Detected lcore 31 as core 31 on socket 0 00:05:52.937 EAL: Detected lcore 32 as core 32 on socket 0 00:05:52.937 EAL: Detected lcore 33 as core 33 on socket 0 00:05:52.937 EAL: Detected lcore 34 as core 34 on socket 0 00:05:52.937 EAL: Detected lcore 35 as core 35 on socket 0 00:05:52.937 EAL: Detected lcore 36 as core 0 on socket 1 00:05:52.937 EAL: Detected lcore 37 as core 1 on socket 1 00:05:52.937 EAL: Detected lcore 38 as core 2 on socket 1 00:05:52.937 EAL: Detected lcore 39 as core 3 on socket 1 00:05:52.937 EAL: Detected lcore 40 as core 4 on socket 1 00:05:52.937 EAL: Detected lcore 41 as core 5 on socket 1 00:05:52.937 EAL: Detected lcore 42 as core 6 on socket 1 00:05:52.937 EAL: Detected lcore 43 as core 7 on socket 1 00:05:52.937 EAL: Detected lcore 44 as core 8 on socket 1 00:05:52.937 EAL: Detected lcore 45 as core 9 on socket 1 00:05:52.937 EAL: Detected lcore 46 as core 10 on socket 1 00:05:52.937 EAL: Detected lcore 47 as core 11 on socket 1 00:05:52.937 EAL: Detected lcore 48 as core 12 on socket 1 00:05:52.937 EAL: Detected lcore 49 as core 13 on socket 1 00:05:52.937 EAL: Detected lcore 50 as core 14 on socket 1 00:05:52.937 EAL: Detected lcore 51 as core 15 on socket 1 00:05:52.937 EAL: Detected lcore 52 as core 16 on socket 1 00:05:52.937 EAL: Detected lcore 
53 as core 17 on socket 1 00:05:52.937 EAL: Detected lcore 54 as core 18 on socket 1 00:05:52.937 EAL: Detected lcore 55 as core 19 on socket 1 00:05:52.937 EAL: Detected lcore 56 as core 20 on socket 1 00:05:52.937 EAL: Detected lcore 57 as core 21 on socket 1 00:05:52.937 EAL: Detected lcore 58 as core 22 on socket 1 00:05:52.937 EAL: Detected lcore 59 as core 23 on socket 1 00:05:52.937 EAL: Detected lcore 60 as core 24 on socket 1 00:05:52.937 EAL: Detected lcore 61 as core 25 on socket 1 00:05:52.937 EAL: Detected lcore 62 as core 26 on socket 1 00:05:52.937 EAL: Detected lcore 63 as core 27 on socket 1 00:05:52.937 EAL: Detected lcore 64 as core 28 on socket 1 00:05:52.937 EAL: Detected lcore 65 as core 29 on socket 1 00:05:52.937 EAL: Detected lcore 66 as core 30 on socket 1 00:05:52.937 EAL: Detected lcore 67 as core 31 on socket 1 00:05:52.937 EAL: Detected lcore 68 as core 32 on socket 1 00:05:52.937 EAL: Detected lcore 69 as core 33 on socket 1 00:05:52.937 EAL: Detected lcore 70 as core 34 on socket 1 00:05:52.937 EAL: Detected lcore 71 as core 35 on socket 1 00:05:52.937 EAL: Detected lcore 72 as core 0 on socket 0 00:05:52.937 EAL: Detected lcore 73 as core 1 on socket 0 00:05:52.937 EAL: Detected lcore 74 as core 2 on socket 0 00:05:52.937 EAL: Detected lcore 75 as core 3 on socket 0 00:05:52.937 EAL: Detected lcore 76 as core 4 on socket 0 00:05:52.937 EAL: Detected lcore 77 as core 5 on socket 0 00:05:52.937 EAL: Detected lcore 78 as core 6 on socket 0 00:05:52.937 EAL: Detected lcore 79 as core 7 on socket 0 00:05:52.937 EAL: Detected lcore 80 as core 8 on socket 0 00:05:52.937 EAL: Detected lcore 81 as core 9 on socket 0 00:05:52.937 EAL: Detected lcore 82 as core 10 on socket 0 00:05:52.937 EAL: Detected lcore 83 as core 11 on socket 0 00:05:52.937 EAL: Detected lcore 84 as core 12 on socket 0 00:05:52.937 EAL: Detected lcore 85 as core 13 on socket 0 00:05:52.937 EAL: Detected lcore 86 as core 14 on socket 0 00:05:52.937 EAL: Detected lcore 87 as core 15 on socket 0 00:05:52.937 EAL: Detected lcore 88 as core 16 on socket 0 00:05:52.937 EAL: Detected lcore 89 as core 17 on socket 0 00:05:52.937 EAL: Detected lcore 90 as core 18 on socket 0 00:05:52.937 EAL: Detected lcore 91 as core 19 on socket 0 00:05:52.937 EAL: Detected lcore 92 as core 20 on socket 0 00:05:52.937 EAL: Detected lcore 93 as core 21 on socket 0 00:05:52.937 EAL: Detected lcore 94 as core 22 on socket 0 00:05:52.937 EAL: Detected lcore 95 as core 23 on socket 0 00:05:52.937 EAL: Detected lcore 96 as core 24 on socket 0 00:05:52.937 EAL: Detected lcore 97 as core 25 on socket 0 00:05:52.937 EAL: Detected lcore 98 as core 26 on socket 0 00:05:52.937 EAL: Detected lcore 99 as core 27 on socket 0 00:05:52.937 EAL: Detected lcore 100 as core 28 on socket 0 00:05:52.937 EAL: Detected lcore 101 as core 29 on socket 0 00:05:52.937 EAL: Detected lcore 102 as core 30 on socket 0 00:05:52.937 EAL: Detected lcore 103 as core 31 on socket 0 00:05:52.937 EAL: Detected lcore 104 as core 32 on socket 0 00:05:52.937 EAL: Detected lcore 105 as core 33 on socket 0 00:05:52.937 EAL: Detected lcore 106 as core 34 on socket 0 00:05:52.937 EAL: Detected lcore 107 as core 35 on socket 0 00:05:52.937 EAL: Detected lcore 108 as core 0 on socket 1 00:05:52.937 EAL: Detected lcore 109 as core 1 on socket 1 00:05:52.937 EAL: Detected lcore 110 as core 2 on socket 1 00:05:52.937 EAL: Detected lcore 111 as core 3 on socket 1 00:05:52.937 EAL: Detected lcore 112 as core 4 on socket 1 00:05:52.937 EAL: Detected lcore 113 as core 5 on 
socket 1 00:05:52.937 EAL: Detected lcore 114 as core 6 on socket 1 00:05:52.937 EAL: Detected lcore 115 as core 7 on socket 1 00:05:52.937 EAL: Detected lcore 116 as core 8 on socket 1 00:05:52.937 EAL: Detected lcore 117 as core 9 on socket 1 00:05:52.937 EAL: Detected lcore 118 as core 10 on socket 1 00:05:52.937 EAL: Detected lcore 119 as core 11 on socket 1 00:05:52.937 EAL: Detected lcore 120 as core 12 on socket 1 00:05:52.937 EAL: Detected lcore 121 as core 13 on socket 1 00:05:52.937 EAL: Detected lcore 122 as core 14 on socket 1 00:05:52.937 EAL: Detected lcore 123 as core 15 on socket 1 00:05:52.937 EAL: Detected lcore 124 as core 16 on socket 1 00:05:52.937 EAL: Detected lcore 125 as core 17 on socket 1 00:05:52.937 EAL: Detected lcore 126 as core 18 on socket 1 00:05:52.937 EAL: Detected lcore 127 as core 19 on socket 1 00:05:52.937 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:52.937 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:52.937 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:52.937 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:52.937 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:52.937 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:52.937 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:52.937 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:52.937 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:52.937 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:52.937 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:52.937 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:52.937 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:52.937 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:52.937 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:52.937 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:52.937 EAL: Maximum logical cores by configuration: 128 00:05:52.937 EAL: Detected CPU lcores: 128 00:05:52.937 EAL: Detected NUMA nodes: 2 00:05:52.937 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:52.937 EAL: Detected shared linkage of DPDK 00:05:52.937 EAL: No shared files mode enabled, IPC will be disabled 00:05:52.937 EAL: Bus pci wants IOVA as 'DC' 00:05:52.937 EAL: Buses did not request a specific IOVA mode. 00:05:52.937 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:52.937 EAL: Selected IOVA mode 'VA' 00:05:52.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.937 EAL: Probing VFIO support... 00:05:52.937 EAL: IOMMU type 1 (Type 1) is supported 00:05:52.937 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:52.937 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:52.937 EAL: VFIO support initialized 00:05:52.937 EAL: Ask a virtual area of 0x2e000 bytes 00:05:52.937 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:52.937 EAL: Setting up physically contiguous memory... 
00:05:52.937 EAL: Setting maximum number of open files to 524288 00:05:52.937 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:52.937 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:52.937 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:52.937 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.937 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:52.937 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.937 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.937 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:52.937 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:52.937 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.937 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:52.937 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.937 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.937 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:52.937 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:52.937 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.937 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:52.937 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.937 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.937 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:52.937 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:52.937 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.937 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:52.937 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:52.937 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.938 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:52.938 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:52.938 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:52.938 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.938 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:52.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.938 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.938 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:52.938 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:52.938 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.938 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:52.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.938 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.938 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:52.938 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:52.938 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.938 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:52.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.938 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.938 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:52.938 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:52.938 EAL: Ask a virtual area of 0x61000 bytes 00:05:52.938 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:52.938 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:52.938 EAL: Ask a virtual area of 0x400000000 bytes 00:05:52.938 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:52.938 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:52.938 EAL: Hugepages will be freed exactly as allocated. 00:05:52.938 EAL: No shared files mode enabled, IPC is disabled 00:05:52.938 EAL: No shared files mode enabled, IPC is disabled 00:05:52.938 EAL: TSC frequency is ~2400000 KHz 00:05:52.938 EAL: Main lcore 0 is ready (tid=7fb3473dda00;cpuset=[0]) 00:05:52.938 EAL: Trying to obtain current memory policy. 00:05:52.938 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:52.938 EAL: Restoring previous memory policy: 0 00:05:52.938 EAL: request: mp_malloc_sync 00:05:52.938 EAL: No shared files mode enabled, IPC is disabled 00:05:52.938 EAL: Heap on socket 0 was expanded by 2MB 00:05:52.938 EAL: No shared files mode enabled, IPC is disabled 00:05:53.199 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:53.199 EAL: Mem event callback 'spdk:(nil)' registered 00:05:53.199 00:05:53.199 00:05:53.199 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.199 http://cunit.sourceforge.net/ 00:05:53.199 00:05:53.199 00:05:53.199 Suite: components_suite 00:05:53.199 Test: vtophys_malloc_test ...passed 00:05:53.199 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:53.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.199 EAL: Restoring previous memory policy: 4 00:05:53.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.199 EAL: request: mp_malloc_sync 00:05:53.199 EAL: No shared files mode enabled, IPC is disabled 00:05:53.199 EAL: Heap on socket 0 was expanded by 4MB 00:05:53.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.199 EAL: request: mp_malloc_sync 00:05:53.199 EAL: No shared files mode enabled, IPC is disabled 00:05:53.199 EAL: Heap on socket 0 was shrunk by 4MB 00:05:53.199 EAL: Trying to obtain current memory policy. 00:05:53.199 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.199 EAL: Restoring previous memory policy: 4 00:05:53.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.199 EAL: request: mp_malloc_sync 00:05:53.199 EAL: No shared files mode enabled, IPC is disabled 00:05:53.199 EAL: Heap on socket 0 was expanded by 6MB 00:05:53.199 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 6MB 00:05:53.200 EAL: Trying to obtain current memory policy. 00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 10MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 10MB 00:05:53.200 EAL: Trying to obtain current memory policy. 
00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 18MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 18MB 00:05:53.200 EAL: Trying to obtain current memory policy. 00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 34MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 34MB 00:05:53.200 EAL: Trying to obtain current memory policy. 00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 66MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 66MB 00:05:53.200 EAL: Trying to obtain current memory policy. 00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 130MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 130MB 00:05:53.200 EAL: Trying to obtain current memory policy. 00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 258MB 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was shrunk by 258MB 00:05:53.200 EAL: Trying to obtain current memory policy. 
00:05:53.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.200 EAL: Restoring previous memory policy: 4 00:05:53.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.200 EAL: request: mp_malloc_sync 00:05:53.200 EAL: No shared files mode enabled, IPC is disabled 00:05:53.200 EAL: Heap on socket 0 was expanded by 514MB 00:05:53.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.462 EAL: request: mp_malloc_sync 00:05:53.462 EAL: No shared files mode enabled, IPC is disabled 00:05:53.462 EAL: Heap on socket 0 was shrunk by 514MB 00:05:53.462 EAL: Trying to obtain current memory policy. 00:05:53.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:53.462 EAL: Restoring previous memory policy: 4 00:05:53.462 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.462 EAL: request: mp_malloc_sync 00:05:53.462 EAL: No shared files mode enabled, IPC is disabled 00:05:53.462 EAL: Heap on socket 0 was expanded by 1026MB 00:05:53.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.723 EAL: request: mp_malloc_sync 00:05:53.723 EAL: No shared files mode enabled, IPC is disabled 00:05:53.723 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:53.723 passed 00:05:53.723 00:05:53.723 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.723 suites 1 1 n/a 0 0 00:05:53.723 tests 2 2 2 0 0 00:05:53.723 asserts 497 497 497 0 n/a 00:05:53.723 00:05:53.723 Elapsed time = 0.658 seconds 00:05:53.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:53.723 EAL: request: mp_malloc_sync 00:05:53.723 EAL: No shared files mode enabled, IPC is disabled 00:05:53.723 EAL: Heap on socket 0 was shrunk by 2MB 00:05:53.723 EAL: No shared files mode enabled, IPC is disabled 00:05:53.723 EAL: No shared files mode enabled, IPC is disabled 00:05:53.723 EAL: No shared files mode enabled, IPC is disabled 00:05:53.723 00:05:53.723 real 0m0.775s 00:05:53.723 user 0m0.400s 00:05:53.723 sys 0m0.351s 00:05:53.723 09:54:32 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.723 09:54:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:53.723 ************************************ 00:05:53.723 END TEST env_vtophys 00:05:53.723 ************************************ 00:05:53.723 09:54:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:53.723 09:54:32 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.723 09:54:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.723 09:54:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.723 ************************************ 00:05:53.723 START TEST env_pci 00:05:53.723 ************************************ 00:05:53.723 09:54:32 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:53.723 00:05:53.723 00:05:53.723 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.723 http://cunit.sourceforge.net/ 00:05:53.723 00:05:53.723 00:05:53.723 Suite: pci 00:05:53.984 Test: pci_hook ...[2024-07-25 09:54:32.858450] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1074394 has claimed it 00:05:53.984 EAL: Cannot find device (10000:00:01.0) 00:05:53.984 EAL: Failed to attach device on primary process 00:05:53.984 passed 00:05:53.984 00:05:53.984 Run Summary: Type Total Ran Passed Failed Inactive 
00:05:53.984 suites 1 1 n/a 0 0 00:05:53.984 tests 1 1 1 0 0 00:05:53.984 asserts 25 25 25 0 n/a 00:05:53.984 00:05:53.984 Elapsed time = 0.029 seconds 00:05:53.984 00:05:53.984 real 0m0.049s 00:05:53.984 user 0m0.016s 00:05:53.984 sys 0m0.033s 00:05:53.984 09:54:32 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.984 09:54:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:53.984 ************************************ 00:05:53.984 END TEST env_pci 00:05:53.984 ************************************ 00:05:53.984 09:54:32 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:53.984 09:54:32 env -- env/env.sh@15 -- # uname 00:05:53.984 09:54:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:53.984 09:54:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:53.984 09:54:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.984 09:54:32 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:53.984 09:54:32 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.984 09:54:32 env -- common/autotest_common.sh@10 -- # set +x 00:05:53.985 ************************************ 00:05:53.985 START TEST env_dpdk_post_init 00:05:53.985 ************************************ 00:05:53.985 09:54:32 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:53.985 EAL: Detected CPU lcores: 128 00:05:53.985 EAL: Detected NUMA nodes: 2 00:05:53.985 EAL: Detected shared linkage of DPDK 00:05:53.985 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:53.985 EAL: Selected IOVA mode 'VA' 00:05:53.985 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.985 EAL: VFIO support initialized 00:05:53.985 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:53.985 EAL: Using IOMMU type 1 (Type 1) 00:05:54.246 EAL: Ignore mapping IO port bar(1) 00:05:54.246 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:54.506 EAL: Ignore mapping IO port bar(1) 00:05:54.506 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:54.506 EAL: Ignore mapping IO port bar(1) 00:05:54.767 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:54.767 EAL: Ignore mapping IO port bar(1) 00:05:55.027 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:55.027 EAL: Ignore mapping IO port bar(1) 00:05:55.028 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:55.288 EAL: Ignore mapping IO port bar(1) 00:05:55.288 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:55.549 EAL: Ignore mapping IO port bar(1) 00:05:55.549 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:55.810 EAL: Ignore mapping IO port bar(1) 00:05:55.810 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:56.070 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:56.070 EAL: Ignore mapping IO port bar(1) 00:05:56.330 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:05:56.330 EAL: Ignore mapping IO port bar(1) 00:05:56.590 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 
00:05:56.590 EAL: Ignore mapping IO port bar(1) 00:05:56.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:56.861 EAL: Ignore mapping IO port bar(1) 00:05:56.861 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:57.161 EAL: Ignore mapping IO port bar(1) 00:05:57.161 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:57.422 EAL: Ignore mapping IO port bar(1) 00:05:57.422 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:57.422 EAL: Ignore mapping IO port bar(1) 00:05:57.684 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:57.684 EAL: Ignore mapping IO port bar(1) 00:05:57.945 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:57.945 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:57.945 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:57.945 Starting DPDK initialization... 00:05:57.945 Starting SPDK post initialization... 00:05:57.945 SPDK NVMe probe 00:05:57.945 Attaching to 0000:65:00.0 00:05:57.945 Attached to 0000:65:00.0 00:05:57.945 Cleaning up... 00:05:59.859 00:05:59.859 real 0m5.685s 00:05:59.859 user 0m0.164s 00:05:59.859 sys 0m0.061s 00:05:59.859 09:54:38 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.859 09:54:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:59.859 ************************************ 00:05:59.859 END TEST env_dpdk_post_init 00:05:59.859 ************************************ 00:05:59.859 09:54:38 env -- env/env.sh@26 -- # uname 00:05:59.859 09:54:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:59.859 09:54:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.859 09:54:38 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.859 09:54:38 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.859 09:54:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.859 ************************************ 00:05:59.859 START TEST env_mem_callbacks 00:05:59.859 ************************************ 00:05:59.859 09:54:38 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.859 EAL: Detected CPU lcores: 128 00:05:59.859 EAL: Detected NUMA nodes: 2 00:05:59.859 EAL: Detected shared linkage of DPDK 00:05:59.859 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.859 EAL: Selected IOVA mode 'VA' 00:05:59.859 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.859 EAL: VFIO support initialized 00:05:59.859 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.859 00:05:59.859 00:05:59.859 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.859 http://cunit.sourceforge.net/ 00:05:59.859 00:05:59.859 00:05:59.859 Suite: memory 00:05:59.859 Test: test ... 
00:05:59.859 register 0x200000200000 2097152 00:05:59.859 malloc 3145728 00:05:59.859 register 0x200000400000 4194304 00:05:59.859 buf 0x200000500000 len 3145728 PASSED 00:05:59.859 malloc 64 00:05:59.859 buf 0x2000004fff40 len 64 PASSED 00:05:59.859 malloc 4194304 00:05:59.859 register 0x200000800000 6291456 00:05:59.859 buf 0x200000a00000 len 4194304 PASSED 00:05:59.859 free 0x200000500000 3145728 00:05:59.859 free 0x2000004fff40 64 00:05:59.859 unregister 0x200000400000 4194304 PASSED 00:05:59.859 free 0x200000a00000 4194304 00:05:59.859 unregister 0x200000800000 6291456 PASSED 00:05:59.859 malloc 8388608 00:05:59.859 register 0x200000400000 10485760 00:05:59.859 buf 0x200000600000 len 8388608 PASSED 00:05:59.860 free 0x200000600000 8388608 00:05:59.860 unregister 0x200000400000 10485760 PASSED 00:05:59.860 passed 00:05:59.860 00:05:59.860 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.860 suites 1 1 n/a 0 0 00:05:59.860 tests 1 1 1 0 0 00:05:59.860 asserts 15 15 15 0 n/a 00:05:59.860 00:05:59.860 Elapsed time = 0.008 seconds 00:05:59.860 00:05:59.860 real 0m0.064s 00:05:59.860 user 0m0.025s 00:05:59.860 sys 0m0.040s 00:05:59.860 09:54:38 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.860 09:54:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 ************************************ 00:05:59.860 END TEST env_mem_callbacks 00:05:59.860 ************************************ 00:05:59.860 00:05:59.860 real 0m7.266s 00:05:59.860 user 0m0.976s 00:05:59.860 sys 0m0.835s 00:05:59.860 09:54:38 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.860 09:54:38 env -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 ************************************ 00:05:59.860 END TEST env 00:05:59.860 ************************************ 00:05:59.860 09:54:38 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:59.860 09:54:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.860 09:54:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.860 09:54:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.860 ************************************ 00:05:59.860 START TEST rpc 00:05:59.860 ************************************ 00:05:59.860 09:54:38 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:00.121 * Looking for test storage... 00:06:00.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.121 09:54:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:00.121 09:54:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1075845 00:06:00.121 09:54:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.121 09:54:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1075845 00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@831 -- # '[' -z 1075845 ']' 00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
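The rpc suite that starts here follows the usual pattern for SPDK functional tests: start spdk_tgt in the background, remember its pid, and block until the JSON-RPC socket answers before any rpc.py calls are made. A sketch of that flow as it appears in the trace (waitforlisten's real implementation does more, such as timeouts and pid liveness checks, so this is only an outline):

  "$rootdir/build/bin/spdk_tgt" -e bdev &        # -e bdev enables the bdev tracepoint group
  spdk_pid=$!
  trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
  echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
  # poll until the target responds on the default RPC socket
  while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.1
  done

Once the socket answers, the trace continues with the target's own startup log and the individual rpc_integrity style sub-tests.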
00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:00.121 09:54:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.121 [2024-07-25 09:54:39.050515] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:00.121 [2024-07-25 09:54:39.050570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1075845 ] 00:06:00.121 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.121 [2024-07-25 09:54:39.109995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.121 [2024-07-25 09:54:39.175396] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:00.121 [2024-07-25 09:54:39.175433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1075845' to capture a snapshot of events at runtime. 00:06:00.121 [2024-07-25 09:54:39.175440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.121 [2024-07-25 09:54:39.175447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.121 [2024-07-25 09:54:39.175452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1075845 for offline analysis/debug. 00:06:00.121 [2024-07-25 09:54:39.175474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.692 09:54:39 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.692 09:54:39 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:00.692 09:54:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.692 09:54:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:00.692 09:54:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:00.692 09:54:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:00.692 09:54:39 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:00.692 09:54:39 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:00.692 09:54:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.953 ************************************ 00:06:00.953 START TEST rpc_integrity 00:06:00.953 ************************************ 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:00.953 09:54:39 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.953 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.953 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:00.953 { 00:06:00.953 "name": "Malloc0", 00:06:00.953 "aliases": [ 00:06:00.953 "af99aced-698a-486b-8a95-250ff8d876f3" 00:06:00.953 ], 00:06:00.953 "product_name": "Malloc disk", 00:06:00.953 "block_size": 512, 00:06:00.953 "num_blocks": 16384, 00:06:00.953 "uuid": "af99aced-698a-486b-8a95-250ff8d876f3", 00:06:00.953 "assigned_rate_limits": { 00:06:00.953 "rw_ios_per_sec": 0, 00:06:00.953 "rw_mbytes_per_sec": 0, 00:06:00.953 "r_mbytes_per_sec": 0, 00:06:00.953 "w_mbytes_per_sec": 0 00:06:00.953 }, 00:06:00.953 "claimed": false, 00:06:00.953 "zoned": false, 00:06:00.953 "supported_io_types": { 00:06:00.953 "read": true, 00:06:00.953 "write": true, 00:06:00.953 "unmap": true, 00:06:00.953 "flush": true, 00:06:00.953 "reset": true, 00:06:00.953 "nvme_admin": false, 00:06:00.953 "nvme_io": false, 00:06:00.953 "nvme_io_md": false, 00:06:00.953 "write_zeroes": true, 00:06:00.953 "zcopy": true, 00:06:00.953 "get_zone_info": false, 00:06:00.953 "zone_management": false, 00:06:00.953 "zone_append": false, 00:06:00.953 "compare": false, 00:06:00.953 "compare_and_write": false, 00:06:00.953 "abort": true, 00:06:00.953 "seek_hole": false, 00:06:00.953 "seek_data": false, 00:06:00.953 "copy": true, 00:06:00.953 "nvme_iov_md": false 00:06:00.953 }, 00:06:00.953 "memory_domains": [ 00:06:00.953 { 00:06:00.953 "dma_device_id": "system", 00:06:00.953 "dma_device_type": 1 00:06:00.954 }, 00:06:00.954 { 00:06:00.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.954 "dma_device_type": 2 00:06:00.954 } 00:06:00.954 ], 00:06:00.954 "driver_specific": {} 00:06:00.954 } 00:06:00.954 ]' 00:06:00.954 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:00.954 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:00.954 09:54:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:00.954 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.954 09:54:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.954 [2024-07-25 09:54:40.001344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:00.954 [2024-07-25 09:54:40.001377] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:00.954 [2024-07-25 09:54:40.001390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x892d80 00:06:00.954 [2024-07-25 09:54:40.001398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:00.954 [2024-07-25 09:54:40.003260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:06:00.954 [2024-07-25 09:54:40.003281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:00.954 Passthru0 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.954 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.954 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:00.954 { 00:06:00.954 "name": "Malloc0", 00:06:00.954 "aliases": [ 00:06:00.954 "af99aced-698a-486b-8a95-250ff8d876f3" 00:06:00.954 ], 00:06:00.954 "product_name": "Malloc disk", 00:06:00.954 "block_size": 512, 00:06:00.954 "num_blocks": 16384, 00:06:00.954 "uuid": "af99aced-698a-486b-8a95-250ff8d876f3", 00:06:00.954 "assigned_rate_limits": { 00:06:00.954 "rw_ios_per_sec": 0, 00:06:00.954 "rw_mbytes_per_sec": 0, 00:06:00.954 "r_mbytes_per_sec": 0, 00:06:00.954 "w_mbytes_per_sec": 0 00:06:00.954 }, 00:06:00.954 "claimed": true, 00:06:00.954 "claim_type": "exclusive_write", 00:06:00.954 "zoned": false, 00:06:00.954 "supported_io_types": { 00:06:00.954 "read": true, 00:06:00.954 "write": true, 00:06:00.954 "unmap": true, 00:06:00.954 "flush": true, 00:06:00.954 "reset": true, 00:06:00.954 "nvme_admin": false, 00:06:00.954 "nvme_io": false, 00:06:00.954 "nvme_io_md": false, 00:06:00.954 "write_zeroes": true, 00:06:00.954 "zcopy": true, 00:06:00.954 "get_zone_info": false, 00:06:00.954 "zone_management": false, 00:06:00.954 "zone_append": false, 00:06:00.954 "compare": false, 00:06:00.954 "compare_and_write": false, 00:06:00.954 "abort": true, 00:06:00.954 "seek_hole": false, 00:06:00.954 "seek_data": false, 00:06:00.954 "copy": true, 00:06:00.954 "nvme_iov_md": false 00:06:00.954 }, 00:06:00.954 "memory_domains": [ 00:06:00.954 { 00:06:00.954 "dma_device_id": "system", 00:06:00.954 "dma_device_type": 1 00:06:00.954 }, 00:06:00.954 { 00:06:00.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.954 "dma_device_type": 2 00:06:00.954 } 00:06:00.954 ], 00:06:00.954 "driver_specific": {} 00:06:00.954 }, 00:06:00.954 { 00:06:00.954 "name": "Passthru0", 00:06:00.954 "aliases": [ 00:06:00.954 "2111d5bb-6502-5ab7-8076-92ec0be51100" 00:06:00.954 ], 00:06:00.954 "product_name": "passthru", 00:06:00.954 "block_size": 512, 00:06:00.954 "num_blocks": 16384, 00:06:00.954 "uuid": "2111d5bb-6502-5ab7-8076-92ec0be51100", 00:06:00.954 "assigned_rate_limits": { 00:06:00.954 "rw_ios_per_sec": 0, 00:06:00.954 "rw_mbytes_per_sec": 0, 00:06:00.954 "r_mbytes_per_sec": 0, 00:06:00.954 "w_mbytes_per_sec": 0 00:06:00.954 }, 00:06:00.954 "claimed": false, 00:06:00.954 "zoned": false, 00:06:00.954 "supported_io_types": { 00:06:00.954 "read": true, 00:06:00.954 "write": true, 00:06:00.954 "unmap": true, 00:06:00.954 "flush": true, 00:06:00.954 "reset": true, 00:06:00.954 "nvme_admin": false, 00:06:00.954 "nvme_io": false, 00:06:00.954 "nvme_io_md": false, 00:06:00.954 "write_zeroes": true, 00:06:00.954 "zcopy": true, 00:06:00.954 "get_zone_info": false, 00:06:00.954 "zone_management": false, 00:06:00.954 "zone_append": false, 00:06:00.954 "compare": false, 00:06:00.954 "compare_and_write": false, 00:06:00.954 "abort": true, 00:06:00.954 "seek_hole": false, 00:06:00.954 "seek_data": false, 00:06:00.954 "copy": true, 00:06:00.954 "nvme_iov_md": false 00:06:00.954 
}, 00:06:00.954 "memory_domains": [ 00:06:00.954 { 00:06:00.954 "dma_device_id": "system", 00:06:00.954 "dma_device_type": 1 00:06:00.954 }, 00:06:00.954 { 00:06:00.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:00.954 "dma_device_type": 2 00:06:00.954 } 00:06:00.954 ], 00:06:00.954 "driver_specific": { 00:06:00.954 "passthru": { 00:06:00.954 "name": "Passthru0", 00:06:00.954 "base_bdev_name": "Malloc0" 00:06:00.954 } 00:06:00.954 } 00:06:00.954 } 00:06:00.954 ]' 00:06:00.954 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:00.954 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:00.954 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.954 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.216 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:01.216 09:54:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.216 00:06:01.216 real 0m0.304s 00:06:01.216 user 0m0.196s 00:06:01.216 sys 0m0.039s 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 ************************************ 00:06:01.216 END TEST rpc_integrity 00:06:01.216 ************************************ 00:06:01.216 09:54:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.216 09:54:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.216 09:54:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.216 09:54:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 ************************************ 00:06:01.216 START TEST rpc_plugins 00:06:01.216 ************************************ 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.216 { 00:06:01.216 "name": "Malloc1", 00:06:01.216 "aliases": [ 00:06:01.216 "ef1c1cd8-1cbb-4d64-92fc-a500ca442fed" 00:06:01.216 ], 00:06:01.216 "product_name": "Malloc disk", 00:06:01.216 "block_size": 4096, 00:06:01.216 "num_blocks": 256, 00:06:01.216 "uuid": "ef1c1cd8-1cbb-4d64-92fc-a500ca442fed", 00:06:01.216 "assigned_rate_limits": { 00:06:01.216 "rw_ios_per_sec": 0, 00:06:01.216 "rw_mbytes_per_sec": 0, 00:06:01.216 "r_mbytes_per_sec": 0, 00:06:01.216 "w_mbytes_per_sec": 0 00:06:01.216 }, 00:06:01.216 "claimed": false, 00:06:01.216 "zoned": false, 00:06:01.216 "supported_io_types": { 00:06:01.216 "read": true, 00:06:01.216 "write": true, 00:06:01.216 "unmap": true, 00:06:01.216 "flush": true, 00:06:01.216 "reset": true, 00:06:01.216 "nvme_admin": false, 00:06:01.216 "nvme_io": false, 00:06:01.216 "nvme_io_md": false, 00:06:01.216 "write_zeroes": true, 00:06:01.216 "zcopy": true, 00:06:01.216 "get_zone_info": false, 00:06:01.216 "zone_management": false, 00:06:01.216 "zone_append": false, 00:06:01.216 "compare": false, 00:06:01.216 "compare_and_write": false, 00:06:01.216 "abort": true, 00:06:01.216 "seek_hole": false, 00:06:01.216 "seek_data": false, 00:06:01.216 "copy": true, 00:06:01.216 "nvme_iov_md": false 00:06:01.216 }, 00:06:01.216 "memory_domains": [ 00:06:01.216 { 00:06:01.216 "dma_device_id": "system", 00:06:01.216 "dma_device_type": 1 00:06:01.216 }, 00:06:01.216 { 00:06:01.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.216 "dma_device_type": 2 00:06:01.216 } 00:06:01.216 ], 00:06:01.216 "driver_specific": {} 00:06:01.216 } 00:06:01.216 ]' 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.216 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.216 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:01.477 09:54:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:01.477 00:06:01.477 real 0m0.115s 00:06:01.477 user 0m0.071s 00:06:01.477 sys 0m0.008s 00:06:01.477 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.477 09:54:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:01.477 ************************************ 00:06:01.477 END TEST rpc_plugins 00:06:01.477 ************************************ 00:06:01.477 09:54:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:01.477 09:54:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.477 09:54:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.477 09:54:40 
rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.477 ************************************ 00:06:01.477 START TEST rpc_trace_cmd_test 00:06:01.477 ************************************ 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.477 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:01.477 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1075845", 00:06:01.477 "tpoint_group_mask": "0x8", 00:06:01.477 "iscsi_conn": { 00:06:01.477 "mask": "0x2", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "scsi": { 00:06:01.477 "mask": "0x4", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "bdev": { 00:06:01.477 "mask": "0x8", 00:06:01.477 "tpoint_mask": "0xffffffffffffffff" 00:06:01.477 }, 00:06:01.477 "nvmf_rdma": { 00:06:01.477 "mask": "0x10", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "nvmf_tcp": { 00:06:01.477 "mask": "0x20", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "ftl": { 00:06:01.477 "mask": "0x40", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "blobfs": { 00:06:01.477 "mask": "0x80", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "dsa": { 00:06:01.477 "mask": "0x200", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.477 "thread": { 00:06:01.477 "mask": "0x400", 00:06:01.477 "tpoint_mask": "0x0" 00:06:01.477 }, 00:06:01.478 "nvme_pcie": { 00:06:01.478 "mask": "0x800", 00:06:01.478 "tpoint_mask": "0x0" 00:06:01.478 }, 00:06:01.478 "iaa": { 00:06:01.478 "mask": "0x1000", 00:06:01.478 "tpoint_mask": "0x0" 00:06:01.478 }, 00:06:01.478 "nvme_tcp": { 00:06:01.478 "mask": "0x2000", 00:06:01.478 "tpoint_mask": "0x0" 00:06:01.478 }, 00:06:01.478 "bdev_nvme": { 00:06:01.478 "mask": "0x4000", 00:06:01.478 "tpoint_mask": "0x0" 00:06:01.478 }, 00:06:01.478 "sock": { 00:06:01.478 "mask": "0x8000", 00:06:01.478 "tpoint_mask": "0x0" 00:06:01.478 } 00:06:01.478 }' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:01.478 00:06:01.478 real 0m0.179s 00:06:01.478 user 0m0.149s 00:06:01.478 sys 0m0.022s 00:06:01.478 09:54:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:01.478 09:54:40 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:01.478 ************************************ 00:06:01.478 END TEST rpc_trace_cmd_test 00:06:01.478 ************************************ 00:06:01.738 09:54:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:01.738 09:54:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:01.738 09:54:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:01.738 09:54:40 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:01.738 09:54:40 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:01.738 09:54:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.738 ************************************ 00:06:01.738 START TEST rpc_daemon_integrity 00:06:01.738 ************************************ 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:01.738 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.739 { 00:06:01.739 "name": "Malloc2", 00:06:01.739 "aliases": [ 00:06:01.739 "8d0809f4-0f69-4192-9311-ef3588ecd4c8" 00:06:01.739 ], 00:06:01.739 "product_name": "Malloc disk", 00:06:01.739 "block_size": 512, 00:06:01.739 "num_blocks": 16384, 00:06:01.739 "uuid": "8d0809f4-0f69-4192-9311-ef3588ecd4c8", 00:06:01.739 "assigned_rate_limits": { 00:06:01.739 "rw_ios_per_sec": 0, 00:06:01.739 "rw_mbytes_per_sec": 0, 00:06:01.739 "r_mbytes_per_sec": 0, 00:06:01.739 "w_mbytes_per_sec": 0 00:06:01.739 }, 00:06:01.739 "claimed": false, 00:06:01.739 "zoned": false, 00:06:01.739 "supported_io_types": { 00:06:01.739 "read": true, 00:06:01.739 "write": true, 00:06:01.739 "unmap": true, 00:06:01.739 "flush": true, 00:06:01.739 "reset": true, 00:06:01.739 "nvme_admin": false, 00:06:01.739 "nvme_io": false, 00:06:01.739 "nvme_io_md": false, 00:06:01.739 "write_zeroes": true, 00:06:01.739 "zcopy": true, 00:06:01.739 "get_zone_info": false, 00:06:01.739 "zone_management": false, 00:06:01.739 "zone_append": false, 00:06:01.739 "compare": false, 00:06:01.739 "compare_and_write": false, 
00:06:01.739 "abort": true, 00:06:01.739 "seek_hole": false, 00:06:01.739 "seek_data": false, 00:06:01.739 "copy": true, 00:06:01.739 "nvme_iov_md": false 00:06:01.739 }, 00:06:01.739 "memory_domains": [ 00:06:01.739 { 00:06:01.739 "dma_device_id": "system", 00:06:01.739 "dma_device_type": 1 00:06:01.739 }, 00:06:01.739 { 00:06:01.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.739 "dma_device_type": 2 00:06:01.739 } 00:06:01.739 ], 00:06:01.739 "driver_specific": {} 00:06:01.739 } 00:06:01.739 ]' 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.739 [2024-07-25 09:54:40.807544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:01.739 [2024-07-25 09:54:40.807574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.739 [2024-07-25 09:54:40.807585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x893a90 00:06:01.739 [2024-07-25 09:54:40.807592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.739 [2024-07-25 09:54:40.808809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.739 [2024-07-25 09:54:40.808829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.739 Passthru0 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.739 { 00:06:01.739 "name": "Malloc2", 00:06:01.739 "aliases": [ 00:06:01.739 "8d0809f4-0f69-4192-9311-ef3588ecd4c8" 00:06:01.739 ], 00:06:01.739 "product_name": "Malloc disk", 00:06:01.739 "block_size": 512, 00:06:01.739 "num_blocks": 16384, 00:06:01.739 "uuid": "8d0809f4-0f69-4192-9311-ef3588ecd4c8", 00:06:01.739 "assigned_rate_limits": { 00:06:01.739 "rw_ios_per_sec": 0, 00:06:01.739 "rw_mbytes_per_sec": 0, 00:06:01.739 "r_mbytes_per_sec": 0, 00:06:01.739 "w_mbytes_per_sec": 0 00:06:01.739 }, 00:06:01.739 "claimed": true, 00:06:01.739 "claim_type": "exclusive_write", 00:06:01.739 "zoned": false, 00:06:01.739 "supported_io_types": { 00:06:01.739 "read": true, 00:06:01.739 "write": true, 00:06:01.739 "unmap": true, 00:06:01.739 "flush": true, 00:06:01.739 "reset": true, 00:06:01.739 "nvme_admin": false, 00:06:01.739 "nvme_io": false, 00:06:01.739 "nvme_io_md": false, 00:06:01.739 "write_zeroes": true, 00:06:01.739 "zcopy": true, 00:06:01.739 "get_zone_info": false, 00:06:01.739 "zone_management": false, 00:06:01.739 "zone_append": false, 00:06:01.739 "compare": false, 00:06:01.739 "compare_and_write": false, 00:06:01.739 "abort": true, 00:06:01.739 "seek_hole": false, 00:06:01.739 "seek_data": false, 00:06:01.739 "copy": true, 
00:06:01.739 "nvme_iov_md": false 00:06:01.739 }, 00:06:01.739 "memory_domains": [ 00:06:01.739 { 00:06:01.739 "dma_device_id": "system", 00:06:01.739 "dma_device_type": 1 00:06:01.739 }, 00:06:01.739 { 00:06:01.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.739 "dma_device_type": 2 00:06:01.739 } 00:06:01.739 ], 00:06:01.739 "driver_specific": {} 00:06:01.739 }, 00:06:01.739 { 00:06:01.739 "name": "Passthru0", 00:06:01.739 "aliases": [ 00:06:01.739 "bfc5d7c2-33f8-5174-8a0e-2168db658da5" 00:06:01.739 ], 00:06:01.739 "product_name": "passthru", 00:06:01.739 "block_size": 512, 00:06:01.739 "num_blocks": 16384, 00:06:01.739 "uuid": "bfc5d7c2-33f8-5174-8a0e-2168db658da5", 00:06:01.739 "assigned_rate_limits": { 00:06:01.739 "rw_ios_per_sec": 0, 00:06:01.739 "rw_mbytes_per_sec": 0, 00:06:01.739 "r_mbytes_per_sec": 0, 00:06:01.739 "w_mbytes_per_sec": 0 00:06:01.739 }, 00:06:01.739 "claimed": false, 00:06:01.739 "zoned": false, 00:06:01.739 "supported_io_types": { 00:06:01.739 "read": true, 00:06:01.739 "write": true, 00:06:01.739 "unmap": true, 00:06:01.739 "flush": true, 00:06:01.739 "reset": true, 00:06:01.739 "nvme_admin": false, 00:06:01.739 "nvme_io": false, 00:06:01.739 "nvme_io_md": false, 00:06:01.739 "write_zeroes": true, 00:06:01.739 "zcopy": true, 00:06:01.739 "get_zone_info": false, 00:06:01.739 "zone_management": false, 00:06:01.739 "zone_append": false, 00:06:01.739 "compare": false, 00:06:01.739 "compare_and_write": false, 00:06:01.739 "abort": true, 00:06:01.739 "seek_hole": false, 00:06:01.739 "seek_data": false, 00:06:01.739 "copy": true, 00:06:01.739 "nvme_iov_md": false 00:06:01.739 }, 00:06:01.739 "memory_domains": [ 00:06:01.739 { 00:06:01.739 "dma_device_id": "system", 00:06:01.739 "dma_device_type": 1 00:06:01.739 }, 00:06:01.739 { 00:06:01.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.739 "dma_device_type": 2 00:06:01.739 } 00:06:01.739 ], 00:06:01.739 "driver_specific": { 00:06:01.739 "passthru": { 00:06:01.739 "name": "Passthru0", 00:06:01.739 "base_bdev_name": "Malloc2" 00:06:01.739 } 00:06:01.739 } 00:06:01.739 } 00:06:01.739 ]' 00:06:01.739 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.000 09:54:40 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.000 00:06:02.000 real 0m0.263s 00:06:02.000 user 0m0.164s 00:06:02.000 sys 0m0.036s 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.000 09:54:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:02.000 ************************************ 00:06:02.000 END TEST rpc_daemon_integrity 00:06:02.000 ************************************ 00:06:02.000 09:54:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:02.000 09:54:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1075845 00:06:02.000 09:54:40 rpc -- common/autotest_common.sh@950 -- # '[' -z 1075845 ']' 00:06:02.000 09:54:40 rpc -- common/autotest_common.sh@954 -- # kill -0 1075845 00:06:02.000 09:54:40 rpc -- common/autotest_common.sh@955 -- # uname 00:06:02.000 09:54:40 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.000 09:54:40 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1075845 00:06:02.000 09:54:41 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.000 09:54:41 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.000 09:54:41 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1075845' 00:06:02.000 killing process with pid 1075845 00:06:02.000 09:54:41 rpc -- common/autotest_common.sh@969 -- # kill 1075845 00:06:02.000 09:54:41 rpc -- common/autotest_common.sh@974 -- # wait 1075845 00:06:02.261 00:06:02.261 real 0m2.323s 00:06:02.261 user 0m3.028s 00:06:02.261 sys 0m0.632s 00:06:02.261 09:54:41 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:02.261 09:54:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.261 ************************************ 00:06:02.261 END TEST rpc 00:06:02.261 ************************************ 00:06:02.261 09:54:41 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.261 09:54:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.261 09:54:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.261 09:54:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.261 ************************************ 00:06:02.261 START TEST skip_rpc 00:06:02.261 ************************************ 00:06:02.261 09:54:41 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:02.522 * Looking for test storage... 
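All four sub-tests of the rpc suite above (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) drive the spdk_tgt that was started with '-e bdev' purely over JSON-RPC on /var/tmp/spdk.sock; rpc_cmd in these scripts is essentially a wrapper around scripts/rpc.py. A rough manual equivalent of the bdev lifecycle and the trace check, assuming a target is already listening on the default socket (pid 1075845 is the one from this run):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py bdev_malloc_create 8 512                       # 8 MiB malloc bdev, 512-byte blocks; prints its name (Malloc0 here)
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0   # claims Malloc0 behind a passthru vbdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                     # 2, as asserted above
    ./scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask       # 0xffffffffffffffff, because '-e bdev' enabled tpoint group 0x8
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0
    # spdk_trace -s spdk_tgt -p 1075845   # capture a trace snapshot, as suggested in the startup notice above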
00:06:02.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:02.522 09:54:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:02.522 09:54:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:02.522 09:54:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:02.522 09:54:41 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:02.522 09:54:41 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:02.522 09:54:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.522 ************************************ 00:06:02.522 START TEST skip_rpc 00:06:02.522 ************************************ 00:06:02.522 09:54:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:02.522 09:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1076363 00:06:02.522 09:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.522 09:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:02.522 09:54:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:02.522 [2024-07-25 09:54:41.508426] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:02.522 [2024-07-25 09:54:41.508485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1076363 ] 00:06:02.522 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.522 [2024-07-25 09:54:41.571523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.522 [2024-07-25 09:54:41.644952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1076363 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1076363 ']' 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1076363 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1076363 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1076363' 00:06:07.814 killing process with pid 1076363 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1076363 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1076363 00:06:07.814 00:06:07.814 real 0m5.279s 00:06:07.814 user 0m5.071s 00:06:07.814 sys 0m0.243s 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.814 09:54:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.814 ************************************ 00:06:07.814 END TEST skip_rpc 00:06:07.814 ************************************ 00:06:07.814 09:54:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:07.814 09:54:46 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.814 09:54:46 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.814 09:54:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.814 ************************************ 00:06:07.814 START TEST skip_rpc_with_json 00:06:07.814 ************************************ 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1077420 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1077420 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1077420 ']' 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
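The skip_rpc case that finishes above exercises the --no-rpc-server flag: the target starts normally (reactor started on core 0) but never opens an RPC socket, so the spdk_get_version call is expected to fail and the NOT wrapper turns that failure into a pass (es=1). A minimal manual check along the same lines, from the same build tree:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    sleep 5
    ./scripts/rpc.py spdk_get_version && echo 'unexpected: RPC answered' || echo 'RPC refused, as expected'
    sudo pkill -f spdk_tgt   # stop the target again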
00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.814 09:54:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:07.814 [2024-07-25 09:54:46.869768] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:07.814 [2024-07-25 09:54:46.869827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1077420 ] 00:06:07.814 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.814 [2024-07-25 09:54:46.932037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.075 [2024-07-25 09:54:47.006464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.647 [2024-07-25 09:54:47.626093] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:08.647 request: 00:06:08.647 { 00:06:08.647 "trtype": "tcp", 00:06:08.647 "method": "nvmf_get_transports", 00:06:08.647 "req_id": 1 00:06:08.647 } 00:06:08.647 Got JSON-RPC error response 00:06:08.647 response: 00:06:08.647 { 00:06:08.647 "code": -19, 00:06:08.647 "message": "No such device" 00:06:08.647 } 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.647 [2024-07-25 09:54:47.638219] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:08.647 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.909 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:08.909 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:08.909 { 00:06:08.909 "subsystems": [ 00:06:08.909 { 00:06:08.909 "subsystem": "vfio_user_target", 00:06:08.909 "config": null 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "keyring", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "iobuf", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "iobuf_set_options", 00:06:08.909 "params": { 00:06:08.909 "small_pool_count": 8192, 00:06:08.909 "large_pool_count": 1024, 00:06:08.909 "small_bufsize": 8192, 00:06:08.909 "large_bufsize": 
135168 00:06:08.909 } 00:06:08.909 } 00:06:08.909 ] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "sock", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "sock_set_default_impl", 00:06:08.909 "params": { 00:06:08.909 "impl_name": "posix" 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "sock_impl_set_options", 00:06:08.909 "params": { 00:06:08.909 "impl_name": "ssl", 00:06:08.909 "recv_buf_size": 4096, 00:06:08.909 "send_buf_size": 4096, 00:06:08.909 "enable_recv_pipe": true, 00:06:08.909 "enable_quickack": false, 00:06:08.909 "enable_placement_id": 0, 00:06:08.909 "enable_zerocopy_send_server": true, 00:06:08.909 "enable_zerocopy_send_client": false, 00:06:08.909 "zerocopy_threshold": 0, 00:06:08.909 "tls_version": 0, 00:06:08.909 "enable_ktls": false 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "sock_impl_set_options", 00:06:08.909 "params": { 00:06:08.909 "impl_name": "posix", 00:06:08.909 "recv_buf_size": 2097152, 00:06:08.909 "send_buf_size": 2097152, 00:06:08.909 "enable_recv_pipe": true, 00:06:08.909 "enable_quickack": false, 00:06:08.909 "enable_placement_id": 0, 00:06:08.909 "enable_zerocopy_send_server": true, 00:06:08.909 "enable_zerocopy_send_client": false, 00:06:08.909 "zerocopy_threshold": 0, 00:06:08.909 "tls_version": 0, 00:06:08.909 "enable_ktls": false 00:06:08.909 } 00:06:08.909 } 00:06:08.909 ] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "vmd", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "accel", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "accel_set_options", 00:06:08.909 "params": { 00:06:08.909 "small_cache_size": 128, 00:06:08.909 "large_cache_size": 16, 00:06:08.909 "task_count": 2048, 00:06:08.909 "sequence_count": 2048, 00:06:08.909 "buf_count": 2048 00:06:08.909 } 00:06:08.909 } 00:06:08.909 ] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "bdev", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "bdev_set_options", 00:06:08.909 "params": { 00:06:08.909 "bdev_io_pool_size": 65535, 00:06:08.909 "bdev_io_cache_size": 256, 00:06:08.909 "bdev_auto_examine": true, 00:06:08.909 "iobuf_small_cache_size": 128, 00:06:08.909 "iobuf_large_cache_size": 16 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "bdev_raid_set_options", 00:06:08.909 "params": { 00:06:08.909 "process_window_size_kb": 1024, 00:06:08.909 "process_max_bandwidth_mb_sec": 0 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "bdev_iscsi_set_options", 00:06:08.909 "params": { 00:06:08.909 "timeout_sec": 30 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "bdev_nvme_set_options", 00:06:08.909 "params": { 00:06:08.909 "action_on_timeout": "none", 00:06:08.909 "timeout_us": 0, 00:06:08.909 "timeout_admin_us": 0, 00:06:08.909 "keep_alive_timeout_ms": 10000, 00:06:08.909 "arbitration_burst": 0, 00:06:08.909 "low_priority_weight": 0, 00:06:08.909 "medium_priority_weight": 0, 00:06:08.909 "high_priority_weight": 0, 00:06:08.909 "nvme_adminq_poll_period_us": 10000, 00:06:08.909 "nvme_ioq_poll_period_us": 0, 00:06:08.909 "io_queue_requests": 0, 00:06:08.909 "delay_cmd_submit": true, 00:06:08.909 "transport_retry_count": 4, 00:06:08.909 "bdev_retry_count": 3, 00:06:08.909 "transport_ack_timeout": 0, 00:06:08.909 "ctrlr_loss_timeout_sec": 0, 00:06:08.909 "reconnect_delay_sec": 0, 00:06:08.909 "fast_io_fail_timeout_sec": 0, 00:06:08.909 "disable_auto_failback": false, 00:06:08.909 "generate_uuids": 
false, 00:06:08.909 "transport_tos": 0, 00:06:08.909 "nvme_error_stat": false, 00:06:08.909 "rdma_srq_size": 0, 00:06:08.909 "io_path_stat": false, 00:06:08.909 "allow_accel_sequence": false, 00:06:08.909 "rdma_max_cq_size": 0, 00:06:08.909 "rdma_cm_event_timeout_ms": 0, 00:06:08.909 "dhchap_digests": [ 00:06:08.909 "sha256", 00:06:08.909 "sha384", 00:06:08.909 "sha512" 00:06:08.909 ], 00:06:08.909 "dhchap_dhgroups": [ 00:06:08.909 "null", 00:06:08.909 "ffdhe2048", 00:06:08.909 "ffdhe3072", 00:06:08.909 "ffdhe4096", 00:06:08.909 "ffdhe6144", 00:06:08.909 "ffdhe8192" 00:06:08.909 ] 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "bdev_nvme_set_hotplug", 00:06:08.909 "params": { 00:06:08.909 "period_us": 100000, 00:06:08.909 "enable": false 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "bdev_wait_for_examine" 00:06:08.909 } 00:06:08.909 ] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "scsi", 00:06:08.909 "config": null 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "scheduler", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "framework_set_scheduler", 00:06:08.909 "params": { 00:06:08.909 "name": "static" 00:06:08.909 } 00:06:08.909 } 00:06:08.909 ] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "vhost_scsi", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "vhost_blk", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "ublk", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "nbd", 00:06:08.909 "config": [] 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "subsystem": "nvmf", 00:06:08.909 "config": [ 00:06:08.909 { 00:06:08.909 "method": "nvmf_set_config", 00:06:08.909 "params": { 00:06:08.909 "discovery_filter": "match_any", 00:06:08.909 "admin_cmd_passthru": { 00:06:08.909 "identify_ctrlr": false 00:06:08.909 } 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "nvmf_set_max_subsystems", 00:06:08.909 "params": { 00:06:08.909 "max_subsystems": 1024 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "nvmf_set_crdt", 00:06:08.909 "params": { 00:06:08.909 "crdt1": 0, 00:06:08.909 "crdt2": 0, 00:06:08.909 "crdt3": 0 00:06:08.909 } 00:06:08.909 }, 00:06:08.909 { 00:06:08.909 "method": "nvmf_create_transport", 00:06:08.909 "params": { 00:06:08.909 "trtype": "TCP", 00:06:08.909 "max_queue_depth": 128, 00:06:08.909 "max_io_qpairs_per_ctrlr": 127, 00:06:08.909 "in_capsule_data_size": 4096, 00:06:08.909 "max_io_size": 131072, 00:06:08.909 "io_unit_size": 131072, 00:06:08.909 "max_aq_depth": 128, 00:06:08.909 "num_shared_buffers": 511, 00:06:08.909 "buf_cache_size": 4294967295, 00:06:08.909 "dif_insert_or_strip": false, 00:06:08.909 "zcopy": false, 00:06:08.910 "c2h_success": true, 00:06:08.910 "sock_priority": 0, 00:06:08.910 "abort_timeout_sec": 1, 00:06:08.910 "ack_timeout": 0, 00:06:08.910 "data_wr_pool_size": 0 00:06:08.910 } 00:06:08.910 } 00:06:08.910 ] 00:06:08.910 }, 00:06:08.910 { 00:06:08.910 "subsystem": "iscsi", 00:06:08.910 "config": [ 00:06:08.910 { 00:06:08.910 "method": "iscsi_set_options", 00:06:08.910 "params": { 00:06:08.910 "node_base": "iqn.2016-06.io.spdk", 00:06:08.910 "max_sessions": 128, 00:06:08.910 "max_connections_per_session": 2, 00:06:08.910 "max_queue_depth": 64, 00:06:08.910 "default_time2wait": 2, 00:06:08.910 "default_time2retain": 20, 00:06:08.910 "first_burst_length": 8192, 00:06:08.910 "immediate_data": true, 00:06:08.910 "allow_duplicated_isid": 
false, 00:06:08.910 "error_recovery_level": 0, 00:06:08.910 "nop_timeout": 60, 00:06:08.910 "nop_in_interval": 30, 00:06:08.910 "disable_chap": false, 00:06:08.910 "require_chap": false, 00:06:08.910 "mutual_chap": false, 00:06:08.910 "chap_group": 0, 00:06:08.910 "max_large_datain_per_connection": 64, 00:06:08.910 "max_r2t_per_connection": 4, 00:06:08.910 "pdu_pool_size": 36864, 00:06:08.910 "immediate_data_pool_size": 16384, 00:06:08.910 "data_out_pool_size": 2048 00:06:08.910 } 00:06:08.910 } 00:06:08.910 ] 00:06:08.910 } 00:06:08.910 ] 00:06:08.910 } 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1077420 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1077420 ']' 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1077420 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077420 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077420' 00:06:08.910 killing process with pid 1077420 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1077420 00:06:08.910 09:54:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1077420 00:06:09.171 09:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1077739 00:06:09.171 09:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:09.171 09:54:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1077739 ']' 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1077739' 00:06:14.463 killing process with pid 1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
1077739 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:14.463 00:06:14.463 real 0m6.542s 00:06:14.463 user 0m6.442s 00:06:14.463 sys 0m0.509s 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:14.463 ************************************ 00:06:14.463 END TEST skip_rpc_with_json 00:06:14.463 ************************************ 00:06:14.463 09:54:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.463 ************************************ 00:06:14.463 START TEST skip_rpc_with_delay 00:06:14.463 ************************************ 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:14.463 [2024-07-25 09:54:53.497433] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
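The skip_rpc_with_json run that ends above is a configuration round-trip: the first target gets a TCP transport created over RPC, save_config dumps the resulting JSON (the large blob above) to test/rpc/config.json, a second target is started with --json pointing at that file, and the grep for 'TCP Transport Init' in its log proves the transport was re-created from the saved configuration. A condensed manual version of the same round-trip (the /tmp path below is only an illustration):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sudo ./build/bin/spdk_tgt -m 0x1 &
    sleep 5
    ./scripts/rpc.py nvmf_create_transport -t tcp         # same RPC as in the trace above
    ./scripts/rpc.py save_config > /tmp/spdk_config.json  # dump the live configuration
    sudo pkill -f spdk_tgt
    sudo ./build/bin/spdk_tgt -m 0x1 --json /tmp/spdk_config.json   # prints 'TCP Transport Init' at startup; stop with Ctrl-C when done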
00:06:14.463 [2024-07-25 09:54:53.497508] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.463 00:06:14.463 real 0m0.085s 00:06:14.463 user 0m0.057s 00:06:14.463 sys 0m0.027s 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.463 09:54:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:14.463 ************************************ 00:06:14.463 END TEST skip_rpc_with_delay 00:06:14.463 ************************************ 00:06:14.463 09:54:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:14.463 09:54:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:14.463 09:54:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.463 09:54:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.463 ************************************ 00:06:14.463 START TEST exit_on_failed_rpc_init 00:06:14.463 ************************************ 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1078900 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1078900 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1078900 ']' 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:14.463 09:54:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:14.724 [2024-07-25 09:54:53.648233] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:14.724 [2024-07-25 09:54:53.648292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1078900 ] 00:06:14.724 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.724 [2024-07-25 09:54:53.711441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.724 [2024-07-25 09:54:53.786931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:15.297 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.558 [2024-07-25 09:54:54.469506] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:15.558 [2024-07-25 09:54:54.469558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079141 ] 00:06:15.558 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.558 [2024-07-25 09:54:54.542627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.558 [2024-07-25 09:54:54.606673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.558 [2024-07-25 09:54:54.606737] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:15.558 [2024-07-25 09:54:54.606746] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:15.558 [2024-07-25 09:54:54.606753] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1078900 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1078900 ']' 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1078900 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.558 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1078900 00:06:15.819 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.819 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.819 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1078900' 00:06:15.819 killing process with pid 1078900 00:06:15.819 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1078900 00:06:15.819 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1078900 00:06:15.819 00:06:15.819 real 0m1.341s 00:06:15.819 user 0m1.577s 00:06:15.819 sys 0m0.365s 00:06:15.820 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.820 09:54:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:15.820 ************************************ 00:06:15.820 END TEST exit_on_failed_rpc_init 00:06:15.820 ************************************ 00:06:16.081 09:54:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:16.081 00:06:16.081 real 0m13.662s 00:06:16.081 user 0m13.305s 00:06:16.081 sys 0m1.419s 00:06:16.082 09:54:54 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.082 09:54:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.082 ************************************ 00:06:16.082 END TEST skip_rpc 00:06:16.082 ************************************ 00:06:16.082 09:54:55 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.082 09:54:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.082 09:54:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.082 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.082 ************************************ 00:06:16.082 START TEST rpc_client 00:06:16.082 ************************************ 00:06:16.082 09:54:55 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:16.082 * Looking for test storage... 00:06:16.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:16.082 09:54:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:16.082 OK 00:06:16.082 09:54:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.082 00:06:16.082 real 0m0.132s 00:06:16.082 user 0m0.061s 00:06:16.082 sys 0m0.080s 00:06:16.082 09:54:55 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.082 09:54:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.082 ************************************ 00:06:16.082 END TEST rpc_client 00:06:16.082 ************************************ 00:06:16.343 09:54:55 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.343 09:54:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.343 09:54:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.343 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.343 ************************************ 00:06:16.343 START TEST json_config 00:06:16.343 ************************************ 00:06:16.343 09:54:55 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:16.343 09:54:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:16.343 09:54:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:16.343 09:54:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:16.344 09:54:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:16.344 09:54:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.344 09:54:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.344 09:54:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.344 09:54:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.344 09:54:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.344 09:54:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.344 09:54:55 json_config -- paths/export.sh@5 -- # export PATH 00:06:16.344 09:54:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@47 -- # : 0 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.344 09:54:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:06:16.344 INFO: JSON configuration test init 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.344 09:54:55 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.344 09:54:55 json_config -- json_config/common.sh@9 -- # local app=target 00:06:16.344 09:54:55 json_config -- json_config/common.sh@10 -- # shift 00:06:16.344 09:54:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.344 09:54:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.344 09:54:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.344 09:54:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:06:16.344 09:54:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.344 09:54:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1079429 00:06:16.344 09:54:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.344 Waiting for target to run... 00:06:16.344 09:54:55 json_config -- json_config/common.sh@25 -- # waitforlisten 1079429 /var/tmp/spdk_tgt.sock 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@831 -- # '[' -z 1079429 ']' 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.344 09:54:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:16.344 09:54:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.344 [2024-07-25 09:54:55.447278] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:16.344 [2024-07-25 09:54:55.447350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1079429 ] 00:06:16.344 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.606 [2024-07-25 09:54:55.728137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.867 [2024-07-25 09:54:55.780180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:17.127 09:54:56 json_config -- json_config/common.sh@26 -- # echo '' 00:06:17.127 00:06:17.127 09:54:56 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:06:17.127 09:54:56 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.127 09:54:56 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:06:17.127 09:54:56 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.127 09:54:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.127 09:54:56 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:17.442 09:54:56 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:06:17.443 09:54:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:17.704 09:54:56 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.704 09:54:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:17.704 09:54:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:17.704 09:54:56 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:17.965 09:54:56 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:17.965 09:54:56 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:17.965 09:54:56 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:06:17.965 09:54:56 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@51 -- # sort 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:06:17.966 09:54:56 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:06:17.966 09:54:56 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:17.966 09:54:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@59 -- # return 0 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:06:17.966 09:54:57 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:17.966 09:54:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:06:17.966 09:54:57 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:17.966 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.227 MallocForNvmf0 00:06:18.227 
09:54:57 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.227 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.227 MallocForNvmf1 00:06:18.488 09:54:57 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.488 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:18.488 [2024-07-25 09:54:57.523098] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.488 09:54:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.488 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:18.749 09:54:57 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:18.749 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.010 09:54:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.010 09:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.010 09:54:58 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:19.010 09:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:19.271 [2024-07-25 09:54:58.177213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:19.271 09:54:58 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:06:19.271 09:54:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.271 09:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.271 09:54:58 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:06:19.271 09:54:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.271 09:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.271 09:54:58 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:06:19.271 09:54:58 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.271 09:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:19.532 MallocBdevForConfigChangeCheck 00:06:19.532 09:54:58 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:06:19.532 09:54:58 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.532 09:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.532 09:54:58 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:06:19.532 09:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:19.793 09:54:58 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:06:19.793 INFO: shutting down applications... 00:06:19.793 09:54:58 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:06:19.793 09:54:58 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:06:19.793 09:54:58 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:06:19.793 09:54:58 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:20.054 Calling clear_iscsi_subsystem 00:06:20.054 Calling clear_nvmf_subsystem 00:06:20.054 Calling clear_nbd_subsystem 00:06:20.054 Calling clear_ublk_subsystem 00:06:20.054 Calling clear_vhost_blk_subsystem 00:06:20.054 Calling clear_vhost_scsi_subsystem 00:06:20.054 Calling clear_bdev_subsystem 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:20.316 09:54:59 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:20.577 09:54:59 json_config -- json_config/json_config.sh@349 -- # break 00:06:20.577 09:54:59 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:20.577 09:54:59 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:20.577 09:54:59 json_config -- json_config/common.sh@31 -- # local app=target 00:06:20.577 09:54:59 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.577 09:54:59 json_config -- json_config/common.sh@35 -- # [[ -n 1079429 ]] 00:06:20.577 09:54:59 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1079429 00:06:20.577 09:54:59 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.577 09:54:59 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.577 09:54:59 json_config -- json_config/common.sh@41 -- # kill -0 1079429 00:06:20.577 09:54:59 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.150 09:55:00 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.150 09:55:00 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.150 09:55:00 json_config -- json_config/common.sh@41 -- # kill -0 1079429 00:06:21.150 09:55:00 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.150 09:55:00 json_config -- json_config/common.sh@43 -- # break 00:06:21.150 09:55:00 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.150 09:55:00 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.150 SPDK target shutdown done 00:06:21.150 09:55:00 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:21.150 INFO: relaunching applications... 00:06:21.150 09:55:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.150 09:55:00 json_config -- json_config/common.sh@9 -- # local app=target 00:06:21.150 09:55:00 json_config -- json_config/common.sh@10 -- # shift 00:06:21.150 09:55:00 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:21.150 09:55:00 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:21.150 09:55:00 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:21.150 09:55:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.150 09:55:00 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:21.150 09:55:00 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1080397 00:06:21.150 09:55:00 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:21.150 Waiting for target to run... 00:06:21.150 09:55:00 json_config -- json_config/common.sh@25 -- # waitforlisten 1080397 /var/tmp/spdk_tgt.sock 00:06:21.150 09:55:00 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@831 -- # '[' -z 1080397 ']' 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:21.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.150 09:55:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:21.150 [2024-07-25 09:55:00.069253] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:21.150 [2024-07-25 09:55:00.069310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1080397 ] 00:06:21.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.412 [2024-07-25 09:55:00.465185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.412 [2024-07-25 09:55:00.530472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.983 [2024-07-25 09:55:01.025122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.983 [2024-07-25 09:55:01.057491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:21.983 09:55:01 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.984 09:55:01 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:21.984 09:55:01 json_config -- json_config/common.sh@26 -- # echo '' 00:06:21.984 00:06:21.984 09:55:01 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:21.984 09:55:01 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:21.984 INFO: Checking if target configuration is the same... 00:06:21.984 09:55:01 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:21.984 09:55:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.984 09:55:01 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:21.984 + '[' 2 -ne 2 ']' 00:06:21.984 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:21.984 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:21.984 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.984 +++ basename /dev/fd/62 00:06:22.244 ++ mktemp /tmp/62.XXX 00:06:22.244 + tmp_file_1=/tmp/62.N5w 00:06:22.244 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.244 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.244 + tmp_file_2=/tmp/spdk_tgt_config.json.wfc 00:06:22.244 + ret=0 00:06:22.244 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.505 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:22.506 + diff -u /tmp/62.N5w /tmp/spdk_tgt_config.json.wfc 00:06:22.506 + echo 'INFO: JSON config files are the same' 00:06:22.506 INFO: JSON config files are the same 00:06:22.506 + rm /tmp/62.N5w /tmp/spdk_tgt_config.json.wfc 00:06:22.506 + exit 0 00:06:22.506 09:55:01 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:22.506 09:55:01 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:22.506 INFO: changing configuration and checking if this can be detected... 
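For reference, the saved configuration that was just reported identical was assembled earlier in json_config_setup_target through individual tgt_rpc calls (traced above around 09:54:57-09:54:58). Collected into one sequence they look roughly like the sketch below; the commands and arguments are copied from the trace, and only the RPC shell variable is a shorthand introduced here.

#!/usr/bin/env bash
# Sketch: the RPC sequence behind spdk_tgt_config.json (arguments as traced above).
RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0        # 8 MB malloc bdev, 512-byte blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1       # 4 MB malloc bdev, 1024-byte blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0             # the "*** TCP Transport Init ***" notice above
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
$RPC bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck   # marker bdev for the change check below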
00:06:22.506 09:55:01 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.506 09:55:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:22.506 09:55:01 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:22.506 09:55:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:22.506 09:55:01 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.506 + '[' 2 -ne 2 ']' 00:06:22.506 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:22.506 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:22.506 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.506 +++ basename /dev/fd/62 00:06:22.767 ++ mktemp /tmp/62.XXX 00:06:22.767 + tmp_file_1=/tmp/62.f7l 00:06:22.767 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:22.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:22.767 + tmp_file_2=/tmp/spdk_tgt_config.json.hdy 00:06:22.767 + ret=0 00:06:22.767 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.028 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:23.028 + diff -u /tmp/62.f7l /tmp/spdk_tgt_config.json.hdy 00:06:23.028 + ret=1 00:06:23.028 + echo '=== Start of file: /tmp/62.f7l ===' 00:06:23.028 + cat /tmp/62.f7l 00:06:23.028 + echo '=== End of file: /tmp/62.f7l ===' 00:06:23.028 + echo '' 00:06:23.028 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hdy ===' 00:06:23.028 + cat /tmp/spdk_tgt_config.json.hdy 00:06:23.028 + echo '=== End of file: /tmp/spdk_tgt_config.json.hdy ===' 00:06:23.028 + echo '' 00:06:23.028 + rm /tmp/62.f7l /tmp/spdk_tgt_config.json.hdy 00:06:23.028 + exit 1 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:23.028 INFO: configuration change detected. 
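The change-detection pass traced above follows directly from that setup: the marker bdev exists only so that deleting it makes the live configuration diverge from the saved spdk_tgt_config.json, at which point the same comparison is expected to fail. Below is a sketch of that round trip, invoking json_diff.sh the way the test does, with a process substitution standing in for the /dev/fd/62 seen above; the paths come from the trace and the messages are illustrative.

#!/usr/bin/env bash
# Sketch: provoke a config change and confirm the diff-based check notices it.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

# Drop the marker bdev created solely for this purpose.
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck

# Compare the live config against the saved one; a difference is now the expected outcome.
if "$SPDK/test/json_config/json_diff.sh" <($RPC save_config) "$SPDK/spdk_tgt_config.json"; then
    echo "FAIL: deleting the bdev did not show up in the live configuration" >&2
    exit 1
fi
echo "INFO: configuration change detected."

Internally, as the trace shows, json_diff.sh normalizes both sides with config_filter.py -method sort before running diff -u, so only real content changes, not key ordering, count as a difference.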
00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:23.028 09:55:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.028 09:55:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@321 -- # [[ -n 1080397 ]] 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:23.028 09:55:01 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:23.028 09:55:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:23.028 09:55:01 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:23.028 09:55:02 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:23.028 09:55:02 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:23.028 09:55:02 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:23.028 09:55:02 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:23.028 09:55:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.028 09:55:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.028 09:55:02 json_config -- json_config/json_config.sh@327 -- # killprocess 1080397 00:06:23.028 09:55:02 json_config -- common/autotest_common.sh@950 -- # '[' -z 1080397 ']' 00:06:23.028 09:55:02 json_config -- common/autotest_common.sh@954 -- # kill -0 1080397 00:06:23.028 09:55:02 json_config -- common/autotest_common.sh@955 -- # uname 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1080397 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1080397' 00:06:23.029 killing process with pid 1080397 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@969 -- # kill 1080397 00:06:23.029 09:55:02 json_config -- common/autotest_common.sh@974 -- # wait 1080397 00:06:23.290 09:55:02 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:23.290 09:55:02 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:23.290 09:55:02 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:23.290 09:55:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.552 09:55:02 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:23.552 09:55:02 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:23.552 INFO: Success 00:06:23.552 00:06:23.552 real 0m7.163s 
00:06:23.552 user 0m8.582s 00:06:23.552 sys 0m1.846s 00:06:23.552 09:55:02 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.552 09:55:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:23.552 ************************************ 00:06:23.552 END TEST json_config 00:06:23.552 ************************************ 00:06:23.552 09:55:02 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.552 09:55:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.552 09:55:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.552 09:55:02 -- common/autotest_common.sh@10 -- # set +x 00:06:23.552 ************************************ 00:06:23.552 START TEST json_config_extra_key 00:06:23.552 ************************************ 00:06:23.552 09:55:02 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:23.552 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:23.552 09:55:02 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.552 09:55:02 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.552 09:55:02 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.552 09:55:02 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.552 09:55:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.552 09:55:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.552 09:55:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:23.552 09:55:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.552 09:55:02 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.553 09:55:02 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:23.553 09:55:02 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:23.553 INFO: launching applications... 00:06:23.553 09:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1081169 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:23.553 Waiting for target to run... 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1081169 /var/tmp/spdk_tgt.sock 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1081169 ']' 00:06:23.553 09:55:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:23.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.553 09:55:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:23.553 [2024-07-25 09:55:02.666877] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:23.553 [2024-07-25 09:55:02.666985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081169 ] 00:06:23.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.076 [2024-07-25 09:55:02.970884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.076 [2024-07-25 09:55:03.029546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.337 09:55:03 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.337 09:55:03 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:24.337 00:06:24.337 09:55:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:24.337 INFO: shutting down applications... 00:06:24.337 09:55:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1081169 ]] 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1081169 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1081169 00:06:24.337 09:55:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1081169 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:24.909 09:55:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:24.909 SPDK target shutdown done 00:06:24.909 09:55:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:24.909 Success 00:06:24.909 00:06:24.909 real 0m1.436s 00:06:24.909 user 0m1.061s 00:06:24.909 sys 0m0.393s 00:06:24.909 09:55:03 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.909 09:55:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:24.909 ************************************ 00:06:24.909 END TEST json_config_extra_key 00:06:24.909 ************************************ 00:06:24.909 09:55:03 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:24.909 09:55:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.909 09:55:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.909 09:55:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.909 
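The kill -SIGINT / kill -0 / sleep 0.5 lines traced above are the graceful-shutdown wait from json_config/common.sh, the same loop the json_config test used earlier. As a standalone helper it amounts to the sketch below; the SIGINT, the 30-iteration limit and the 0.5 s poll match the trace, while the function name and timeout message are illustrative.

#!/usr/bin/env bash
# Sketch: wait up to ~15 s for a SIGINT'd spdk_tgt to exit, polling with kill -0.
shutdown_spdk_tgt() {
    local pid=$1
    kill -SIGINT "$pid"                        # ask the target to shut down cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then  # process gone: shutdown finished
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "FAIL: target $pid did not exit within 15 s" >&2
    return 1
}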
************************************ 00:06:24.909 START TEST alias_rpc 00:06:24.909 ************************************ 00:06:24.909 09:55:04 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:25.170 * Looking for test storage... 00:06:25.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:25.170 09:55:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:25.170 09:55:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1081551 00:06:25.170 09:55:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1081551 00:06:25.170 09:55:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1081551 ']' 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.170 09:55:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.170 [2024-07-25 09:55:04.163597] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:25.170 [2024-07-25 09:55:04.163674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081551 ] 00:06:25.170 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.170 [2024-07-25 09:55:04.229944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.170 [2024-07-25 09:55:04.303934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.112 09:55:04 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.112 09:55:04 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:26.112 09:55:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:26.112 09:55:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1081551 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1081551 ']' 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1081551 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1081551 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1081551' 00:06:26.112 killing process with pid 1081551 00:06:26.112 09:55:05 alias_rpc -- common/autotest_common.sh@969 -- # kill 1081551 00:06:26.112 09:55:05 
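The body of the alias_rpc test is a single call, rpc.py load_config with the -i switch, which appears to ask load_config to accept the deprecated RPC aliases the test is named after. A hedged usage sketch; the empty config and the stdin redirection are illustrative assumptions, not the test's own fixture:

    printf '{ "subsystems": [] }' > /tmp/alias_min.json   # hypothetical minimal config
    ./scripts/rpc.py load_config -i < /tmp/alias_min.json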
alias_rpc -- common/autotest_common.sh@974 -- # wait 1081551 00:06:26.374 00:06:26.374 real 0m1.396s 00:06:26.374 user 0m1.546s 00:06:26.374 sys 0m0.382s 00:06:26.374 09:55:05 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.374 09:55:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.374 ************************************ 00:06:26.374 END TEST alias_rpc 00:06:26.374 ************************************ 00:06:26.374 09:55:05 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:26.374 09:55:05 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.374 09:55:05 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:26.374 09:55:05 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.374 09:55:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.374 ************************************ 00:06:26.374 START TEST spdkcli_tcp 00:06:26.374 ************************************ 00:06:26.374 09:55:05 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:26.634 * Looking for test storage... 00:06:26.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1081856 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1081856 00:06:26.634 09:55:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1081856 ']' 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.634 09:55:05 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.635 09:55:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.635 09:55:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.635 [2024-07-25 09:55:05.640705] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
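killprocess, which closes out the alias_rpc run just above and every later test in this log, is more careful than a bare kill: it confirms the pid still exists, resolves the process name with ps, checks it against sudo, then kills and waits so the exit status is reaped. A hedged sketch of that flow; bailing out on sudo is an assumption, since the trace never takes that branch:

    killprocess() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 0            # already gone
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
        [ "$name" = sudo ] && { echo "refusing to kill sudo ($pid)" >&2; return 1; }
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                           # reap if it is our child
    }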
00:06:26.635 [2024-07-25 09:55:05.640784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1081856 ] 00:06:26.635 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.635 [2024-07-25 09:55:05.705533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.895 [2024-07-25 09:55:05.781858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.895 [2024-07-25 09:55:05.781861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.467 09:55:06 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.467 09:55:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:27.467 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1081956 00:06:27.467 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:27.467 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.467 [ 00:06:27.467 "bdev_malloc_delete", 00:06:27.467 "bdev_malloc_create", 00:06:27.467 "bdev_null_resize", 00:06:27.467 "bdev_null_delete", 00:06:27.467 "bdev_null_create", 00:06:27.467 "bdev_nvme_cuse_unregister", 00:06:27.467 "bdev_nvme_cuse_register", 00:06:27.467 "bdev_opal_new_user", 00:06:27.467 "bdev_opal_set_lock_state", 00:06:27.467 "bdev_opal_delete", 00:06:27.467 "bdev_opal_get_info", 00:06:27.467 "bdev_opal_create", 00:06:27.467 "bdev_nvme_opal_revert", 00:06:27.467 "bdev_nvme_opal_init", 00:06:27.467 "bdev_nvme_send_cmd", 00:06:27.467 "bdev_nvme_get_path_iostat", 00:06:27.467 "bdev_nvme_get_mdns_discovery_info", 00:06:27.467 "bdev_nvme_stop_mdns_discovery", 00:06:27.467 "bdev_nvme_start_mdns_discovery", 00:06:27.467 "bdev_nvme_set_multipath_policy", 00:06:27.467 "bdev_nvme_set_preferred_path", 00:06:27.467 "bdev_nvme_get_io_paths", 00:06:27.467 "bdev_nvme_remove_error_injection", 00:06:27.467 "bdev_nvme_add_error_injection", 00:06:27.467 "bdev_nvme_get_discovery_info", 00:06:27.467 "bdev_nvme_stop_discovery", 00:06:27.467 "bdev_nvme_start_discovery", 00:06:27.467 "bdev_nvme_get_controller_health_info", 00:06:27.467 "bdev_nvme_disable_controller", 00:06:27.467 "bdev_nvme_enable_controller", 00:06:27.467 "bdev_nvme_reset_controller", 00:06:27.467 "bdev_nvme_get_transport_statistics", 00:06:27.467 "bdev_nvme_apply_firmware", 00:06:27.467 "bdev_nvme_detach_controller", 00:06:27.467 "bdev_nvme_get_controllers", 00:06:27.467 "bdev_nvme_attach_controller", 00:06:27.467 "bdev_nvme_set_hotplug", 00:06:27.467 "bdev_nvme_set_options", 00:06:27.467 "bdev_passthru_delete", 00:06:27.467 "bdev_passthru_create", 00:06:27.467 "bdev_lvol_set_parent_bdev", 00:06:27.467 "bdev_lvol_set_parent", 00:06:27.467 "bdev_lvol_check_shallow_copy", 00:06:27.467 "bdev_lvol_start_shallow_copy", 00:06:27.467 "bdev_lvol_grow_lvstore", 00:06:27.467 "bdev_lvol_get_lvols", 00:06:27.467 "bdev_lvol_get_lvstores", 00:06:27.467 "bdev_lvol_delete", 00:06:27.467 "bdev_lvol_set_read_only", 00:06:27.467 "bdev_lvol_resize", 00:06:27.467 "bdev_lvol_decouple_parent", 00:06:27.467 "bdev_lvol_inflate", 00:06:27.467 "bdev_lvol_rename", 00:06:27.467 "bdev_lvol_clone_bdev", 00:06:27.467 "bdev_lvol_clone", 00:06:27.467 "bdev_lvol_snapshot", 00:06:27.467 "bdev_lvol_create", 00:06:27.467 "bdev_lvol_delete_lvstore", 00:06:27.467 
"bdev_lvol_rename_lvstore", 00:06:27.467 "bdev_lvol_create_lvstore", 00:06:27.467 "bdev_raid_set_options", 00:06:27.467 "bdev_raid_remove_base_bdev", 00:06:27.467 "bdev_raid_add_base_bdev", 00:06:27.467 "bdev_raid_delete", 00:06:27.467 "bdev_raid_create", 00:06:27.467 "bdev_raid_get_bdevs", 00:06:27.467 "bdev_error_inject_error", 00:06:27.467 "bdev_error_delete", 00:06:27.467 "bdev_error_create", 00:06:27.467 "bdev_split_delete", 00:06:27.467 "bdev_split_create", 00:06:27.467 "bdev_delay_delete", 00:06:27.467 "bdev_delay_create", 00:06:27.467 "bdev_delay_update_latency", 00:06:27.467 "bdev_zone_block_delete", 00:06:27.467 "bdev_zone_block_create", 00:06:27.467 "blobfs_create", 00:06:27.467 "blobfs_detect", 00:06:27.467 "blobfs_set_cache_size", 00:06:27.467 "bdev_aio_delete", 00:06:27.467 "bdev_aio_rescan", 00:06:27.467 "bdev_aio_create", 00:06:27.467 "bdev_ftl_set_property", 00:06:27.467 "bdev_ftl_get_properties", 00:06:27.467 "bdev_ftl_get_stats", 00:06:27.467 "bdev_ftl_unmap", 00:06:27.467 "bdev_ftl_unload", 00:06:27.467 "bdev_ftl_delete", 00:06:27.467 "bdev_ftl_load", 00:06:27.467 "bdev_ftl_create", 00:06:27.467 "bdev_virtio_attach_controller", 00:06:27.467 "bdev_virtio_scsi_get_devices", 00:06:27.467 "bdev_virtio_detach_controller", 00:06:27.467 "bdev_virtio_blk_set_hotplug", 00:06:27.467 "bdev_iscsi_delete", 00:06:27.467 "bdev_iscsi_create", 00:06:27.467 "bdev_iscsi_set_options", 00:06:27.467 "accel_error_inject_error", 00:06:27.467 "ioat_scan_accel_module", 00:06:27.467 "dsa_scan_accel_module", 00:06:27.467 "iaa_scan_accel_module", 00:06:27.467 "vfu_virtio_create_scsi_endpoint", 00:06:27.467 "vfu_virtio_scsi_remove_target", 00:06:27.467 "vfu_virtio_scsi_add_target", 00:06:27.467 "vfu_virtio_create_blk_endpoint", 00:06:27.467 "vfu_virtio_delete_endpoint", 00:06:27.467 "keyring_file_remove_key", 00:06:27.467 "keyring_file_add_key", 00:06:27.467 "keyring_linux_set_options", 00:06:27.467 "iscsi_get_histogram", 00:06:27.467 "iscsi_enable_histogram", 00:06:27.467 "iscsi_set_options", 00:06:27.467 "iscsi_get_auth_groups", 00:06:27.467 "iscsi_auth_group_remove_secret", 00:06:27.467 "iscsi_auth_group_add_secret", 00:06:27.467 "iscsi_delete_auth_group", 00:06:27.467 "iscsi_create_auth_group", 00:06:27.467 "iscsi_set_discovery_auth", 00:06:27.467 "iscsi_get_options", 00:06:27.467 "iscsi_target_node_request_logout", 00:06:27.467 "iscsi_target_node_set_redirect", 00:06:27.467 "iscsi_target_node_set_auth", 00:06:27.467 "iscsi_target_node_add_lun", 00:06:27.467 "iscsi_get_stats", 00:06:27.467 "iscsi_get_connections", 00:06:27.467 "iscsi_portal_group_set_auth", 00:06:27.467 "iscsi_start_portal_group", 00:06:27.467 "iscsi_delete_portal_group", 00:06:27.467 "iscsi_create_portal_group", 00:06:27.467 "iscsi_get_portal_groups", 00:06:27.467 "iscsi_delete_target_node", 00:06:27.467 "iscsi_target_node_remove_pg_ig_maps", 00:06:27.467 "iscsi_target_node_add_pg_ig_maps", 00:06:27.467 "iscsi_create_target_node", 00:06:27.467 "iscsi_get_target_nodes", 00:06:27.467 "iscsi_delete_initiator_group", 00:06:27.467 "iscsi_initiator_group_remove_initiators", 00:06:27.467 "iscsi_initiator_group_add_initiators", 00:06:27.467 "iscsi_create_initiator_group", 00:06:27.467 "iscsi_get_initiator_groups", 00:06:27.467 "nvmf_set_crdt", 00:06:27.467 "nvmf_set_config", 00:06:27.467 "nvmf_set_max_subsystems", 00:06:27.467 "nvmf_stop_mdns_prr", 00:06:27.467 "nvmf_publish_mdns_prr", 00:06:27.467 "nvmf_subsystem_get_listeners", 00:06:27.467 "nvmf_subsystem_get_qpairs", 00:06:27.467 "nvmf_subsystem_get_controllers", 00:06:27.467 
"nvmf_get_stats", 00:06:27.467 "nvmf_get_transports", 00:06:27.467 "nvmf_create_transport", 00:06:27.467 "nvmf_get_targets", 00:06:27.467 "nvmf_delete_target", 00:06:27.467 "nvmf_create_target", 00:06:27.467 "nvmf_subsystem_allow_any_host", 00:06:27.467 "nvmf_subsystem_remove_host", 00:06:27.467 "nvmf_subsystem_add_host", 00:06:27.467 "nvmf_ns_remove_host", 00:06:27.467 "nvmf_ns_add_host", 00:06:27.467 "nvmf_subsystem_remove_ns", 00:06:27.467 "nvmf_subsystem_add_ns", 00:06:27.467 "nvmf_subsystem_listener_set_ana_state", 00:06:27.468 "nvmf_discovery_get_referrals", 00:06:27.468 "nvmf_discovery_remove_referral", 00:06:27.468 "nvmf_discovery_add_referral", 00:06:27.468 "nvmf_subsystem_remove_listener", 00:06:27.468 "nvmf_subsystem_add_listener", 00:06:27.468 "nvmf_delete_subsystem", 00:06:27.468 "nvmf_create_subsystem", 00:06:27.468 "nvmf_get_subsystems", 00:06:27.468 "env_dpdk_get_mem_stats", 00:06:27.468 "nbd_get_disks", 00:06:27.468 "nbd_stop_disk", 00:06:27.468 "nbd_start_disk", 00:06:27.468 "ublk_recover_disk", 00:06:27.468 "ublk_get_disks", 00:06:27.468 "ublk_stop_disk", 00:06:27.468 "ublk_start_disk", 00:06:27.468 "ublk_destroy_target", 00:06:27.468 "ublk_create_target", 00:06:27.468 "virtio_blk_create_transport", 00:06:27.468 "virtio_blk_get_transports", 00:06:27.468 "vhost_controller_set_coalescing", 00:06:27.468 "vhost_get_controllers", 00:06:27.468 "vhost_delete_controller", 00:06:27.468 "vhost_create_blk_controller", 00:06:27.468 "vhost_scsi_controller_remove_target", 00:06:27.468 "vhost_scsi_controller_add_target", 00:06:27.468 "vhost_start_scsi_controller", 00:06:27.468 "vhost_create_scsi_controller", 00:06:27.468 "thread_set_cpumask", 00:06:27.468 "framework_get_governor", 00:06:27.468 "framework_get_scheduler", 00:06:27.468 "framework_set_scheduler", 00:06:27.468 "framework_get_reactors", 00:06:27.468 "thread_get_io_channels", 00:06:27.468 "thread_get_pollers", 00:06:27.468 "thread_get_stats", 00:06:27.468 "framework_monitor_context_switch", 00:06:27.468 "spdk_kill_instance", 00:06:27.468 "log_enable_timestamps", 00:06:27.468 "log_get_flags", 00:06:27.468 "log_clear_flag", 00:06:27.468 "log_set_flag", 00:06:27.468 "log_get_level", 00:06:27.468 "log_set_level", 00:06:27.468 "log_get_print_level", 00:06:27.468 "log_set_print_level", 00:06:27.468 "framework_enable_cpumask_locks", 00:06:27.468 "framework_disable_cpumask_locks", 00:06:27.468 "framework_wait_init", 00:06:27.468 "framework_start_init", 00:06:27.468 "scsi_get_devices", 00:06:27.468 "bdev_get_histogram", 00:06:27.468 "bdev_enable_histogram", 00:06:27.468 "bdev_set_qos_limit", 00:06:27.468 "bdev_set_qd_sampling_period", 00:06:27.468 "bdev_get_bdevs", 00:06:27.468 "bdev_reset_iostat", 00:06:27.468 "bdev_get_iostat", 00:06:27.468 "bdev_examine", 00:06:27.468 "bdev_wait_for_examine", 00:06:27.468 "bdev_set_options", 00:06:27.468 "notify_get_notifications", 00:06:27.468 "notify_get_types", 00:06:27.468 "accel_get_stats", 00:06:27.468 "accel_set_options", 00:06:27.468 "accel_set_driver", 00:06:27.468 "accel_crypto_key_destroy", 00:06:27.468 "accel_crypto_keys_get", 00:06:27.468 "accel_crypto_key_create", 00:06:27.468 "accel_assign_opc", 00:06:27.468 "accel_get_module_info", 00:06:27.468 "accel_get_opc_assignments", 00:06:27.468 "vmd_rescan", 00:06:27.468 "vmd_remove_device", 00:06:27.468 "vmd_enable", 00:06:27.468 "sock_get_default_impl", 00:06:27.468 "sock_set_default_impl", 00:06:27.468 "sock_impl_set_options", 00:06:27.468 "sock_impl_get_options", 00:06:27.468 "iobuf_get_stats", 00:06:27.468 "iobuf_set_options", 
00:06:27.468 "keyring_get_keys", 00:06:27.468 "framework_get_pci_devices", 00:06:27.468 "framework_get_config", 00:06:27.468 "framework_get_subsystems", 00:06:27.468 "vfu_tgt_set_base_path", 00:06:27.468 "trace_get_info", 00:06:27.468 "trace_get_tpoint_group_mask", 00:06:27.468 "trace_disable_tpoint_group", 00:06:27.468 "trace_enable_tpoint_group", 00:06:27.468 "trace_clear_tpoint_mask", 00:06:27.468 "trace_set_tpoint_mask", 00:06:27.468 "spdk_get_version", 00:06:27.468 "rpc_get_methods" 00:06:27.468 ] 00:06:27.468 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:27.468 09:55:06 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:27.468 09:55:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.729 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:27.729 09:55:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1081856 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1081856 ']' 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1081856 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1081856 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1081856' 00:06:27.729 killing process with pid 1081856 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1081856 00:06:27.729 09:55:06 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1081856 00:06:27.990 00:06:27.990 real 0m1.404s 00:06:27.990 user 0m2.555s 00:06:27.990 sys 0m0.431s 00:06:27.990 09:55:06 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.990 09:55:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.990 ************************************ 00:06:27.990 END TEST spdkcli_tcp 00:06:27.990 ************************************ 00:06:27.990 09:55:06 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.990 09:55:06 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.990 09:55:06 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.990 09:55:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.990 ************************************ 00:06:27.990 START TEST dpdk_mem_utility 00:06:27.990 ************************************ 00:06:27.990 09:55:06 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:27.991 * Looking for test storage... 
00:06:27.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:27.991 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:27.991 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1082138 00:06:27.991 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1082138 00:06:27.991 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1082138 ']' 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.991 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.991 [2024-07-25 09:55:07.108051] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:27.991 [2024-07-25 09:55:07.108124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082138 ] 00:06:28.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.252 [2024-07-25 09:55:07.171678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.252 [2024-07-25 09:55:07.246756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.823 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.823 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:28.823 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.823 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.823 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.823 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.823 { 00:06:28.823 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.823 } 00:06:28.823 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.823 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:28.823 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:28.823 1 heaps totaling size 814.000000 MiB 00:06:28.823 size: 814.000000 MiB heap id: 0 00:06:28.823 end heaps---------- 00:06:28.823 8 mempools totaling size 598.116089 MiB 00:06:28.823 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.823 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.823 size: 84.521057 MiB name: bdev_io_1082138 00:06:28.823 size: 51.011292 MiB name: evtpool_1082138 00:06:28.823 
size: 50.003479 MiB name: msgpool_1082138 00:06:28.823 size: 21.763794 MiB name: PDU_Pool 00:06:28.823 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:28.823 size: 0.026123 MiB name: Session_Pool 00:06:28.823 end mempools------- 00:06:28.823 6 memzones totaling size 4.142822 MiB 00:06:28.823 size: 1.000366 MiB name: RG_ring_0_1082138 00:06:28.823 size: 1.000366 MiB name: RG_ring_1_1082138 00:06:28.824 size: 1.000366 MiB name: RG_ring_4_1082138 00:06:28.824 size: 1.000366 MiB name: RG_ring_5_1082138 00:06:28.824 size: 0.125366 MiB name: RG_ring_2_1082138 00:06:28.824 size: 0.015991 MiB name: RG_ring_3_1082138 00:06:28.824 end memzones------- 00:06:28.824 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.085 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:29.085 list of free elements. size: 12.519348 MiB 00:06:29.085 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:29.085 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:29.085 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:29.085 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:29.085 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:29.085 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:29.085 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:29.085 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:29.085 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:29.085 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:29.085 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:29.085 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:29.085 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:29.085 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:29.085 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:29.085 list of standard malloc elements. 
size: 199.218079 MiB 00:06:29.085 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:29.085 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:29.085 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.085 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:29.085 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:29.085 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.085 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:29.085 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.085 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:29.085 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.085 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:29.085 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:29.085 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:29.085 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:29.085 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:29.085 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:29.086 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:29.086 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:29.086 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:29.086 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:29.086 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:29.086 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:29.086 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:29.086 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:29.086 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:29.086 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:29.086 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:29.086 list of memzone associated elements. 
size: 602.262573 MiB 00:06:29.086 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:29.086 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.086 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:29.086 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.086 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:29.086 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1082138_0 00:06:29.086 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:29.086 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1082138_0 00:06:29.086 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:29.086 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1082138_0 00:06:29.086 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:29.086 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.086 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:29.086 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.086 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:29.086 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1082138 00:06:29.086 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:29.086 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1082138 00:06:29.086 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.086 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1082138 00:06:29.086 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:29.086 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.086 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:29.086 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.086 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:29.086 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.086 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:29.086 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.086 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:29.086 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1082138 00:06:29.086 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:29.086 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1082138 00:06:29.086 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:29.086 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1082138 00:06:29.086 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:29.086 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1082138 00:06:29.086 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:29.086 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1082138 00:06:29.086 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:29.086 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.086 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:29.086 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.086 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:29.086 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.086 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:29.086 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1082138 00:06:29.086 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:29.086 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.086 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:29.086 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.086 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:29.086 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1082138 00:06:29.086 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:29.086 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.086 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:29.086 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1082138 00:06:29.086 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:29.086 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1082138 00:06:29.086 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:29.086 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.086 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.086 09:55:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1082138 00:06:29.086 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1082138 ']' 00:06:29.086 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1082138 00:06:29.086 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:29.086 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.086 09:55:07 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1082138 00:06:29.086 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.086 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.086 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1082138' 00:06:29.086 killing process with pid 1082138 00:06:29.086 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1082138 00:06:29.086 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1082138 00:06:29.348 00:06:29.348 real 0m1.286s 00:06:29.348 user 0m1.353s 00:06:29.348 sys 0m0.373s 00:06:29.348 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.348 09:55:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:29.348 ************************************ 00:06:29.348 END TEST dpdk_mem_utility 00:06:29.348 ************************************ 00:06:29.348 09:55:08 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:29.348 09:55:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.348 09:55:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.348 09:55:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.348 ************************************ 00:06:29.348 START TEST event 00:06:29.348 ************************************ 00:06:29.348 09:55:08 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:29.348 * Looking for test storage... 
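The heap, mempool and memzone report above is produced in two steps: the env_dpdk_get_mem_stats RPC makes the target write its allocator state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py then summarizes that dump, either as totals or per heap with -m. A hedged sketch against a target already listening on the default RPC socket:

    ./scripts/rpc.py env_dpdk_get_mem_stats      # returns the dump file path
    ./scripts/dpdk_mem_info.py                   # heap/mempool/memzone totals
    ./scripts/dpdk_mem_info.py -m 0              # element-level detail for heap 0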
00:06:29.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:29.348 09:55:08 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:29.348 09:55:08 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:29.348 09:55:08 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.348 09:55:08 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:29.348 09:55:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.348 09:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.348 ************************************ 00:06:29.348 START TEST event_perf 00:06:29.348 ************************************ 00:06:29.348 09:55:08 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:29.348 Running I/O for 1 seconds...[2024-07-25 09:55:08.470587] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:29.348 [2024-07-25 09:55:08.470695] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082420 ] 00:06:29.610 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.610 [2024-07-25 09:55:08.539405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.610 [2024-07-25 09:55:08.619668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.610 [2024-07-25 09:55:08.619784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.610 [2024-07-25 09:55:08.619942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.610 Running I/O for 1 seconds...[2024-07-25 09:55:08.619943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.553 00:06:30.554 lcore 0: 176022 00:06:30.554 lcore 1: 176024 00:06:30.554 lcore 2: 176021 00:06:30.554 lcore 3: 176023 00:06:30.554 done. 00:06:30.554 00:06:30.554 real 0m1.226s 00:06:30.554 user 0m4.147s 00:06:30.554 sys 0m0.074s 00:06:30.554 09:55:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.554 09:55:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.554 ************************************ 00:06:30.554 END TEST event_perf 00:06:30.554 ************************************ 00:06:30.815 09:55:09 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:30.815 09:55:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:30.815 09:55:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.815 09:55:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.815 ************************************ 00:06:30.815 START TEST event_reactor 00:06:30.815 ************************************ 00:06:30.815 09:55:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:30.815 [2024-07-25 09:55:09.770354] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
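event_perf above runs the event framework on four cores (-m 0xF) for one second (-t 1) and prints one event count per lcore. A hedged one-liner that reruns it and totals those counts, assuming the "lcore N: count" output format shown in the trace:

    ./test/event/event_perf/event_perf -m 0xF -t 1 \
        | awk '/^lcore [0-9]+:/ { total += $3 } END { print "total events/s:", total }'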
00:06:30.815 [2024-07-25 09:55:09.770444] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1082780 ] 00:06:30.815 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.815 [2024-07-25 09:55:09.834670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.815 [2024-07-25 09:55:09.902938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.253 test_start 00:06:32.253 oneshot 00:06:32.253 tick 100 00:06:32.253 tick 100 00:06:32.253 tick 250 00:06:32.253 tick 100 00:06:32.253 tick 100 00:06:32.253 tick 250 00:06:32.253 tick 100 00:06:32.253 tick 500 00:06:32.253 tick 100 00:06:32.253 tick 100 00:06:32.253 tick 250 00:06:32.253 tick 100 00:06:32.253 tick 100 00:06:32.253 test_end 00:06:32.253 00:06:32.253 real 0m1.209s 00:06:32.253 user 0m1.128s 00:06:32.253 sys 0m0.076s 00:06:32.253 09:55:10 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.253 09:55:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:32.253 ************************************ 00:06:32.253 END TEST event_reactor 00:06:32.253 ************************************ 00:06:32.253 09:55:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.253 09:55:10 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:32.253 09:55:10 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.253 09:55:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.253 ************************************ 00:06:32.253 START TEST event_reactor_perf 00:06:32.253 ************************************ 00:06:32.253 09:55:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.253 [2024-07-25 09:55:11.055812] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:32.253 [2024-07-25 09:55:11.055917] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083128 ] 00:06:32.253 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.253 [2024-07-25 09:55:11.130390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.253 [2024-07-25 09:55:11.199763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.211 test_start 00:06:33.211 test_end 00:06:33.211 Performance: 369692 events per second 00:06:33.212 00:06:33.212 real 0m1.218s 00:06:33.212 user 0m1.137s 00:06:33.212 sys 0m0.076s 00:06:33.212 09:55:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:33.212 09:55:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.212 ************************************ 00:06:33.212 END TEST event_reactor_perf 00:06:33.212 ************************************ 00:06:33.212 09:55:12 event -- event/event.sh@49 -- # uname -s 00:06:33.212 09:55:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:33.212 09:55:12 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.212 09:55:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:33.212 09:55:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:33.212 09:55:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:33.212 ************************************ 00:06:33.212 START TEST event_scheduler 00:06:33.212 ************************************ 00:06:33.212 09:55:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:33.473 * Looking for test storage... 00:06:33.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:33.473 09:55:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:33.473 09:55:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1083499 00:06:33.473 09:55:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.473 09:55:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:33.473 09:55:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1083499 00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1083499 ']' 00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
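event_reactor_perf boils down to one headline figure, 369692 events per second in this run. A hedged extraction of just that number, assuming the "Performance: N events per second" line it prints:

    ./test/event/reactor_perf/reactor_perf -t 1 \
        | awk '/^Performance:/ { print $2 }'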
00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:33.473 09:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.473 [2024-07-25 09:55:12.484283] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:33.473 [2024-07-25 09:55:12.484348] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1083499 ] 00:06:33.473 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.473 [2024-07-25 09:55:12.539183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:33.473 [2024-07-25 09:55:12.606413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.473 [2024-07-25 09:55:12.606570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.473 [2024-07-25 09:55:12.606726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.473 [2024-07-25 09:55:12.606728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:34.414 09:55:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 [2024-07-25 09:55:13.272790] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:34.414 [2024-07-25 09:55:13.272803] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.414 [2024-07-25 09:55:13.272810] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:34.414 [2024-07-25 09:55:13.272814] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:34.414 [2024-07-25 09:55:13.272818] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 [2024-07-25 09:55:13.327167] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
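The scheduler test starts its app paused with --wait-for-rpc, switches to the dynamic scheduler over RPC, and only then finishes initialization; the dpdk_governor ERROR above just means the governor was skipped for this core mask, while the notices that follow show the dynamic scheduler coming up anyway. A hedged sketch of that startup sequence, with a bare sleep standing in for the test's waitforlisten helper:

    ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    sched_pid=$!
    sleep 1                                      # assumption: crude readiness wait
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init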
00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 ************************************ 00:06:34.414 START TEST scheduler_create_thread 00:06:34.414 ************************************ 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 2 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 3 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 4 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 5 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 6 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 7 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 8 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.414 9 00:06:34.414 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.415 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.415 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.415 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.984 10 00:06:34.984 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.984 09:55:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.984 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.984 09:55:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.368 09:55:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.368 09:55:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:36.368 09:55:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:36.368 09:55:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.368 09:55:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.940 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.940 09:55:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:36.940 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.940 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.884 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:37.884 09:55:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:37.884 09:55:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:37.884 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:37.884 09:55:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.456 09:55:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.456 00:06:38.456 real 0m4.224s 00:06:38.456 user 0m0.026s 00:06:38.456 sys 0m0.005s 00:06:38.456 09:55:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.456 09:55:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.456 ************************************ 00:06:38.456 END TEST scheduler_create_thread 00:06:38.456 ************************************ 00:06:38.717 09:55:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:38.717 09:55:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1083499 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1083499 ']' 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1083499 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1083499 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1083499' 00:06:38.717 killing process with pid 1083499 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1083499 00:06:38.717 09:55:17 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1083499 00:06:38.979 [2024-07-25 09:55:17.968570] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
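scheduler_create_thread drives everything through rpc.py's plugin mechanism: scheduler_thread_create prints the new thread id (11 and 12 above), which is then passed to scheduler_thread_set_active and scheduler_thread_delete. A hedged sketch; the PYTHONPATH export is an assumption about where the scheduler_plugin module is found:

    export PYTHONPATH=$PYTHONPATH:./test/event/scheduler      # assumed plugin location
    RPC="./scripts/rpc.py --plugin scheduler_plugin"
    tid=$($RPC scheduler_thread_create -n half_active -a 0)   # prints the thread id
    $RPC scheduler_thread_set_active "$tid" 50
    $RPC scheduler_thread_delete "$tid"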
00:06:39.240 00:06:39.240 real 0m5.805s 00:06:39.240 user 0m13.682s 00:06:39.240 sys 0m0.381s 00:06:39.240 09:55:18 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:39.240 09:55:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.240 ************************************ 00:06:39.240 END TEST event_scheduler 00:06:39.240 ************************************ 00:06:39.240 09:55:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:39.240 09:55:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:39.240 09:55:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:39.240 09:55:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:39.240 09:55:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.240 ************************************ 00:06:39.240 START TEST app_repeat 00:06:39.240 ************************************ 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1084577 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1084577' 00:06:39.240 Process app_repeat pid: 1084577 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:39.240 spdk_app_start Round 0 00:06:39.240 09:55:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1084577 /var/tmp/spdk-nbd.sock 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1084577 ']' 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:39.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:39.240 09:55:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:39.240 [2024-07-25 09:55:18.260680] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:39.240 [2024-07-25 09:55:18.260738] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1084577 ] 00:06:39.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.240 [2024-07-25 09:55:18.321505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:39.501 [2024-07-25 09:55:18.388940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.501 [2024-07-25 09:55:18.388943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.074 09:55:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:40.074 09:55:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:40.074 09:55:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.074 Malloc0 00:06:40.074 09:55:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.335 Malloc1 00:06:40.335 09:55:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.335 09:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.596 /dev/nbd0 00:06:40.596 09:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.596 09:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:40.596 09:55:19 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.596 1+0 records in 00:06:40.596 1+0 records out 00:06:40.596 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021152 s, 19.4 MB/s 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:40.596 09:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:40.596 09:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.596 09:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.596 09:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:40.596 /dev/nbd1 00:06:40.857 09:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:40.857 09:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:40.857 09:55:19 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:40.857 09:55:19 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.858 1+0 records in 00:06:40.858 1+0 records out 00:06:40.858 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000252412 s, 16.2 MB/s 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:40.858 09:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.858 09:55:19 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:40.858 { 00:06:40.858 "nbd_device": "/dev/nbd0", 00:06:40.858 "bdev_name": "Malloc0" 00:06:40.858 }, 00:06:40.858 { 00:06:40.858 "nbd_device": "/dev/nbd1", 00:06:40.858 "bdev_name": "Malloc1" 00:06:40.858 } 00:06:40.858 ]' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.858 { 00:06:40.858 "nbd_device": "/dev/nbd0", 00:06:40.858 "bdev_name": "Malloc0" 00:06:40.858 }, 00:06:40.858 { 00:06:40.858 "nbd_device": "/dev/nbd1", 00:06:40.858 "bdev_name": "Malloc1" 00:06:40.858 } 00:06:40.858 ]' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.858 /dev/nbd1' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.858 /dev/nbd1' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.858 256+0 records in 00:06:40.858 256+0 records out 00:06:40.858 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124666 s, 84.1 MB/s 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.858 09:55:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.120 256+0 records in 00:06:41.120 256+0 records out 00:06:41.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163005 s, 64.3 MB/s 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.120 256+0 records in 00:06:41.120 256+0 records out 00:06:41.120 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0178193 s, 58.8 MB/s 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.120 09:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.381 09:55:20 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.381 09:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.643 09:55:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.643 09:55:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.903 09:55:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:41.903 [2024-07-25 09:55:20.925297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.903 [2024-07-25 09:55:20.989771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.903 [2024-07-25 09:55:20.989773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.903 [2024-07-25 09:55:21.021121] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.903 [2024-07-25 09:55:21.021155] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.208 09:55:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.208 09:55:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:45.208 spdk_app_start Round 1 00:06:45.208 09:55:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1084577 /var/tmp/spdk-nbd.sock 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1084577 ']' 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
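Each app_repeat round traced here follows the same data-verify pattern: create two 64 MB malloc bdevs over the app's /var/tmp/spdk-nbd.sock RPC socket, export them as /dev/nbd0 and /dev/nbd1, push 1 MiB of random data through each device with dd, read it back with cmp, then tear the NBD exports down. Condensed into standalone commands (SPDK_DIR and the mktemp file are assumptions; the nbd kernel module must already be loaded), one round looks roughly like this:

# One app_repeat data-verify round, condensed from the trace (sketch).
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=$(mktemp)                                # the real test uses test/event/nbdrandtest

m0=$($RPC bdev_malloc_create 64 4096)        # 64 MB bdev, 4 KiB blocks; prints its name (Malloc0 in this run)
m1=$($RPC bdev_malloc_create 64 4096)
$RPC nbd_start_disk "$m0" /dev/nbd0
$RPC nbd_start_disk "$m1" /dev/nbd1

dd if=/dev/urandom of="$TMP" bs=4096 count=256           # 1 MiB of random data
for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$nbd" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$nbd"                           # must read back byte-identical
done
rm -f "$TMP"

$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1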
00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.208 09:55:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.208 09:55:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.208 Malloc0 00:06:45.208 09:55:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.208 Malloc1 00:06:45.208 09:55:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.208 09:55:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.469 /dev/nbd0 00:06:45.469 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.469 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:45.469 1+0 records in 00:06:45.469 1+0 records out 00:06:45.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245333 s, 16.7 MB/s 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.469 09:55:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.470 09:55:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.470 09:55:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.470 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.470 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.470 09:55:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.731 /dev/nbd1 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.731 1+0 records in 00:06:45.731 1+0 records out 00:06:45.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239872 s, 17.1 MB/s 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:45.731 09:55:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:45.731 { 00:06:45.731 "nbd_device": "/dev/nbd0", 00:06:45.731 "bdev_name": "Malloc0" 00:06:45.731 }, 00:06:45.731 { 00:06:45.731 "nbd_device": "/dev/nbd1", 00:06:45.731 "bdev_name": "Malloc1" 00:06:45.731 } 00:06:45.731 ]' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.731 { 00:06:45.731 "nbd_device": "/dev/nbd0", 00:06:45.731 "bdev_name": "Malloc0" 00:06:45.731 }, 00:06:45.731 { 00:06:45.731 "nbd_device": "/dev/nbd1", 00:06:45.731 "bdev_name": "Malloc1" 00:06:45.731 } 00:06:45.731 ]' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.731 /dev/nbd1' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.731 /dev/nbd1' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.731 09:55:24 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.993 256+0 records in 00:06:45.993 256+0 records out 00:06:45.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116634 s, 89.9 MB/s 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.993 256+0 records in 00:06:45.993 256+0 records out 00:06:45.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.016466 s, 63.7 MB/s 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.993 256+0 records in 00:06:45.993 256+0 records out 00:06:45.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167333 s, 62.7 MB/s 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:24 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.993 09:55:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.255 09:55:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.516 09:55:25 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.516 09:55:25 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.777 09:55:25 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:46.777 [2024-07-25 09:55:25.804715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.777 [2024-07-25 09:55:25.869244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.777 [2024-07-25 09:55:25.869269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.777 [2024-07-25 09:55:25.901503] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:46.777 [2024-07-25 09:55:25.901536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.085 09:55:28 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:50.085 09:55:28 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.085 spdk_app_start Round 2 00:06:50.085 09:55:28 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1084577 /var/tmp/spdk-nbd.sock 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1084577 ']' 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
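The nbd_get_disks, jq and grep -c fragments that close each round come from nbd_get_count, which counts how many exported devices the app still reports; the test expects 2 while Malloc0 and Malloc1 are attached and 0 after both nbd_stop_disk calls. A condensed version of that check (SPDK_DIR again an assumed variable) is:

# Count the NBD exports the app reports, as done at the end of each round above.
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

nbd_get_count() {
    local json names
    json=$($RPC nbd_get_disks)                        # e.g. '[]' once both disks are stopped
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    echo "$names" | grep -c /dev/nbd || true          # grep -c prints 0 and exits 1 on no match
}

expected=0                                            # 2 while the disks are attached, 0 afterwards
count=$(nbd_get_count)
if [ "$count" -ne "$expected" ]; then
    echo "expected $expected nbd devices, found $count" >&2
    exit 1
fi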
00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.085 09:55:28 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:50.085 09:55:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.085 Malloc0 00:06:50.085 09:55:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.085 Malloc1 00:06:50.085 09:55:29 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.085 09:55:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.382 /dev/nbd0 00:06:50.382 09:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.382 09:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:50.382 1+0 records in 00:06:50.382 1+0 records out 00:06:50.382 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288053 s, 14.2 MB/s 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.382 09:55:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.383 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.383 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.383 09:55:29 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.383 /dev/nbd1 00:06:50.383 09:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.383 09:55:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:50.383 09:55:29 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.643 1+0 records in 00:06:50.643 1+0 records out 00:06:50.643 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228829 s, 17.9 MB/s 00:06:50.643 09:55:29 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.643 09:55:29 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:50.643 09:55:29 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:50.643 09:55:29 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:50.643 09:55:29 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:50.643 { 00:06:50.643 "nbd_device": "/dev/nbd0", 00:06:50.643 "bdev_name": "Malloc0" 00:06:50.643 }, 00:06:50.643 { 00:06:50.643 "nbd_device": "/dev/nbd1", 00:06:50.643 "bdev_name": "Malloc1" 00:06:50.643 } 00:06:50.643 ]' 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.643 { 00:06:50.643 "nbd_device": "/dev/nbd0", 00:06:50.643 "bdev_name": "Malloc0" 00:06:50.643 }, 00:06:50.643 { 00:06:50.643 "nbd_device": "/dev/nbd1", 00:06:50.643 "bdev_name": "Malloc1" 00:06:50.643 } 00:06:50.643 ]' 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.643 /dev/nbd1' 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.643 /dev/nbd1' 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.643 09:55:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.644 256+0 records in 00:06:50.644 256+0 records out 00:06:50.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116996 s, 89.6 MB/s 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.644 256+0 records in 00:06:50.644 256+0 records out 00:06:50.644 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159762 s, 65.6 MB/s 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.644 09:55:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.904 256+0 records in 00:06:50.904 256+0 records out 00:06:50.904 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170157 s, 61.6 MB/s 00:06:50.904 09:55:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:50.905 09:55:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.166 09:55:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.427 09:55:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.427 09:55:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:51.427 09:55:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:51.688 [2024-07-25 09:55:30.677233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.688 [2024-07-25 09:55:30.741995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.688 [2024-07-25 09:55:30.741998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.688 [2024-07-25 09:55:30.773368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:51.688 [2024-07-25 09:55:30.773409] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:54.994 09:55:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1084577 /var/tmp/spdk-nbd.sock 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1084577 ']' 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
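Between rounds the harness never restarts the binary itself; as the event.sh fragments above show, it sends spdk_kill_instance SIGTERM over the RPC socket, sleeps three seconds, and waits for the same PID to come back up listening before the next round, which is why the final summary reports spdk_app_start being called in Rounds 0 through 3. The outer loop, reconstructed in abbreviated form (waitforlisten and killprocess are helpers from test/common/autotest_common.sh; SPDK_DIR is an assumed variable), is roughly:

# Shape of the app_repeat driver loop, abbreviated from the trace (sketch).
RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

"$SPDK_DIR/test/event/app_repeat/app_repeat" -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!

for round in 0 1 2; do
    echo "spdk_app_start Round $round"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock   # helper from autotest_common.sh
    # ... malloc bdev creation and /dev/nbd0, /dev/nbd1 data verify as sketched earlier ...
    $RPC spdk_kill_instance SIGTERM      # the app loops back into spdk_app_start for the next round
    sleep 3
done

waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock       # Round 3 comes up one last time
killprocess "$repeat_pid"                                # then the binary is stopped for good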
00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:54.994 09:55:33 event.app_repeat -- event/event.sh@39 -- # killprocess 1084577 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1084577 ']' 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1084577 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1084577 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1084577' 00:06:54.994 killing process with pid 1084577 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1084577 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1084577 00:06:54.994 spdk_app_start is called in Round 0. 00:06:54.994 Shutdown signal received, stop current app iteration 00:06:54.994 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:54.994 spdk_app_start is called in Round 1. 00:06:54.994 Shutdown signal received, stop current app iteration 00:06:54.994 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:54.994 spdk_app_start is called in Round 2. 00:06:54.994 Shutdown signal received, stop current app iteration 00:06:54.994 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:54.994 spdk_app_start is called in Round 3. 
00:06:54.994 Shutdown signal received, stop current app iteration 00:06:54.994 09:55:33 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:54.994 09:55:33 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:54.994 00:06:54.994 real 0m15.653s 00:06:54.994 user 0m33.706s 00:06:54.994 sys 0m2.170s 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.994 09:55:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 ************************************ 00:06:54.994 END TEST app_repeat 00:06:54.994 ************************************ 00:06:54.994 09:55:33 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:54.994 09:55:33 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.994 09:55:33 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.994 09:55:33 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.994 09:55:33 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 ************************************ 00:06:54.994 START TEST cpu_locks 00:06:54.994 ************************************ 00:06:54.994 09:55:33 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:54.994 * Looking for test storage... 00:06:54.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:54.994 09:55:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:54.994 09:55:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:54.994 09:55:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:54.994 09:55:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:54.994 09:55:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.994 09:55:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.994 09:55:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.994 ************************************ 00:06:54.994 START TEST default_locks 00:06:54.994 ************************************ 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1088088 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1088088 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1088088 ']' 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
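Illustrative sketch, not part of the captured trace: the default_locks case that starts here boils down to the check below once the target is listening. The helper name, lslocks call and spdk_cpu_lock pattern come from the trace; the standalone form is an assumption.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &          # single reactor on core 0, as started above
  spdk_tgt_pid=$!
  locks_exist() { lslocks -p "$1" | grep -q spdk_cpu_lock; }
  locks_exist "$spdk_tgt_pid" && echo "core 0 lock held by pid $spdk_tgt_pid"
  kill "$spdk_tgt_pid"          # afterwards, waiting on this pid must fail with 'No such process'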
00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.994 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.256 [2024-07-25 09:55:34.161677] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:55.256 [2024-07-25 09:55:34.161757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088088 ] 00:06:55.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.256 [2024-07-25 09:55:34.226802] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.256 [2024-07-25 09:55:34.301107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.828 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.828 09:55:34 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:55.828 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1088088 00:06:55.828 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1088088 00:06:55.828 09:55:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.400 lslocks: write error 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1088088 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1088088 ']' 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1088088 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088088 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088088' 00:06:56.400 killing process with pid 1088088 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1088088 00:06:56.400 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1088088 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1088088 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1088088 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 1088088 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1088088 ']' 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1088088) - No such process 00:06:56.661 ERROR: process (pid: 1088088) is no longer running 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:56.661 00:06:56.661 real 0m1.466s 00:06:56.661 user 0m1.537s 00:06:56.661 sys 0m0.524s 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:56.661 09:55:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.661 ************************************ 00:06:56.662 END TEST default_locks 00:06:56.662 ************************************ 00:06:56.662 09:55:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:56.662 09:55:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.662 09:55:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.662 09:55:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:56.662 ************************************ 00:06:56.662 START TEST default_locks_via_rpc 00:06:56.662 ************************************ 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1088374 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1088374 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1088374 ']' 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.662 09:55:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.662 [2024-07-25 09:55:35.691481] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:56.662 [2024-07-25 09:55:35.691539] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088374 ] 00:06:56.662 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.662 [2024-07-25 09:55:35.754273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.922 [2024-07-25 09:55:35.829379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1088374 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1088374 00:06:57.494 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
1088374 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1088374 ']' 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1088374 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.760 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088374 00:06:58.021 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.021 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.021 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088374' 00:06:58.021 killing process with pid 1088374 00:06:58.021 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1088374 00:06:58.021 09:55:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1088374 00:06:58.283 00:06:58.283 real 0m1.520s 00:06:58.283 user 0m1.621s 00:06:58.283 sys 0m0.496s 00:06:58.283 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.283 09:55:37 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.283 ************************************ 00:06:58.283 END TEST default_locks_via_rpc 00:06:58.283 ************************************ 00:06:58.283 09:55:37 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:58.283 09:55:37 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.283 09:55:37 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.283 09:55:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.283 ************************************ 00:06:58.283 START TEST non_locking_app_on_locked_coremask 00:06:58.283 ************************************ 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1088711 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1088711 /var/tmp/spdk.sock 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1088711 ']' 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.283 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.284 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:58.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.284 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.284 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.284 [2024-07-25 09:55:37.283847] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.284 [2024-07-25 09:55:37.283902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088711 ] 00:06:58.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.284 [2024-07-25 09:55:37.346098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.546 [2024-07-25 09:55:37.419313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.118 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.118 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:59.118 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1088893 00:06:59.118 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1088893 /var/tmp/spdk2.sock 00:06:59.118 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1088893 ']' 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:59.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:59.119 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 [2024-07-25 09:55:38.111617] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:59.119 [2024-07-25 09:55:38.111685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1088893 ] 00:06:59.119 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.119 [2024-07-25 09:55:38.201482] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
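Illustrative sketch, not part of the captured trace: the non_locking_app_on_locked_coremask setup traced above amounts to the two starts below; the first instance holds the core 0 lock and the second skips taking it, so both run on the same mask. Flags and socket paths come from the trace; the standalone form is an assumption.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &
  first_pid=$!                                                          # holds /var/tmp/spdk_cpu_lock_000
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # prints "CPU core locks deactivated."
  lslocks -p "$first_pid" | grep spdk_cpu_lock                          # only the first target shows the core lock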
00:06:59.119 [2024-07-25 09:55:38.201508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.380 [2024-07-25 09:55:38.330757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.952 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.952 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:59.952 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1088711 00:06:59.952 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1088711 00:06:59.952 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.215 lslocks: write error 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1088711 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1088711 ']' 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1088711 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088711 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088711' 00:07:00.215 killing process with pid 1088711 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1088711 00:07:00.215 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1088711 00:07:00.475 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1088893 00:07:00.475 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1088893 ']' 00:07:00.476 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1088893 00:07:00.476 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.476 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.476 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1088893 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1088893' 00:07:00.737 
killing process with pid 1088893 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1088893 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1088893 00:07:00.737 00:07:00.737 real 0m2.610s 00:07:00.737 user 0m2.859s 00:07:00.737 sys 0m0.760s 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.737 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.737 ************************************ 00:07:00.737 END TEST non_locking_app_on_locked_coremask 00:07:00.737 ************************************ 00:07:00.998 09:55:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:00.998 09:55:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.998 09:55:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.998 09:55:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 ************************************ 00:07:00.998 START TEST locking_app_on_unlocked_coremask 00:07:00.998 ************************************ 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1089264 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1089264 /var/tmp/spdk.sock 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1089264 ']' 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.998 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 [2024-07-25 09:55:39.978401] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:00.998 [2024-07-25 09:55:39.978455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089264 ] 00:07:00.998 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.998 [2024-07-25 09:55:40.039571] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
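Illustrative sketch, not part of the captured trace: the locking_app_on_unlocked_coremask case starting here is the mirror image of the previous one; the first target gives up its core lock with --disable-cpumask-locks, so the normally locked second target (started further down in the trace) can claim core 0. The standalone form is an assumption.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &   # first target, "CPU core locks deactivated."
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &    # second target takes the free core 0 lock
  second_pid=$!
  lslocks -p "$second_pid" | grep -q spdk_cpu_lock && echo "core lock now held by the second target"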
00:07:00.998 [2024-07-25 09:55:40.039606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.998 [2024-07-25 09:55:40.111370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1089490 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1089490 /var/tmp/spdk2.sock 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1089490 ']' 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.942 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.942 [2024-07-25 09:55:40.785475] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:01.942 [2024-07-25 09:55:40.785531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089490 ] 00:07:01.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.942 [2024-07-25 09:55:40.873529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.942 [2024-07-25 09:55:41.008284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.514 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.514 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:02.514 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1089490 00:07:02.514 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.514 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1089490 00:07:03.085 lslocks: write error 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1089264 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1089264 ']' 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1089264 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1089264 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1089264' 00:07:03.085 killing process with pid 1089264 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1089264 00:07:03.085 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1089264 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1089490 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1089490 ']' 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1089490 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1089490 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1089490' 00:07:03.657 killing process with pid 1089490 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1089490 00:07:03.657 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1089490 00:07:03.918 00:07:03.918 real 0m2.932s 00:07:03.918 user 0m3.197s 00:07:03.918 sys 0m0.877s 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.918 ************************************ 00:07:03.918 END TEST locking_app_on_unlocked_coremask 00:07:03.918 ************************************ 00:07:03.918 09:55:42 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:03.918 09:55:42 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.918 09:55:42 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.918 09:55:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.918 ************************************ 00:07:03.918 START TEST locking_app_on_locked_coremask 00:07:03.918 ************************************ 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1089974 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1089974 /var/tmp/spdk.sock 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1089974 ']' 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.918 09:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.918 [2024-07-25 09:55:42.975462] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:03.918 [2024-07-25 09:55:42.975517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089974 ] 00:07:03.918 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.918 [2024-07-25 09:55:43.035461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.179 [2024-07-25 09:55:43.103264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1090026 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1090026 /var/tmp/spdk2.sock 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1090026 /var/tmp/spdk2.sock 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1090026 /var/tmp/spdk2.sock 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1090026 ']' 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:04.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.754 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.754 [2024-07-25 09:55:43.796921] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
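Illustrative sketch, not part of the captured trace: the locking_app_on_locked_coremask attempt traced here comes down to the two starts below; core 0 is already locked, and without --disable-cpumask-locks the second instance cannot claim it and exits (the claim error and the 'No such process' kill follow below in the trace). The standalone form is an assumption.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &                        # first target, holds the core 0 lock
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock   # fails: "Cannot create lock on core 0, probably process <pid> has claimed it."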
00:07:04.754 [2024-07-25 09:55:43.796976] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090026 ] 00:07:04.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.754 [2024-07-25 09:55:43.886041] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1089974 has claimed it. 00:07:04.754 [2024-07-25 09:55:43.886080] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.327 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1090026) - No such process 00:07:05.327 ERROR: process (pid: 1090026) is no longer running 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1089974 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1089974 00:07:05.327 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.897 lslocks: write error 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1089974 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1089974 ']' 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1089974 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1089974 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1089974' 00:07:05.897 killing process with pid 1089974 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1089974 00:07:05.897 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1089974 00:07:06.163 00:07:06.163 real 0m2.175s 00:07:06.163 user 0m2.410s 00:07:06.163 sys 0m0.614s 00:07:06.163 09:55:45 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.163 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.163 ************************************ 00:07:06.163 END TEST locking_app_on_locked_coremask 00:07:06.163 ************************************ 00:07:06.163 09:55:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.163 09:55:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.163 09:55:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.163 09:55:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.163 ************************************ 00:07:06.163 START TEST locking_overlapped_coremask 00:07:06.163 ************************************ 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1090349 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1090349 /var/tmp/spdk.sock 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1090349 ']' 00:07:06.163 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.164 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.164 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.164 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.164 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.164 [2024-07-25 09:55:45.210305] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:06.164 [2024-07-25 09:55:45.210355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090349 ] 00:07:06.164 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.164 [2024-07-25 09:55:45.270078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:06.488 [2024-07-25 09:55:45.340299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.488 [2024-07-25 09:55:45.340570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.488 [2024-07-25 09:55:45.340574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1090685 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1090685 /var/tmp/spdk2.sock 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1090685 /var/tmp/spdk2.sock 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1090685 /var/tmp/spdk2.sock 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1090685 ']' 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.062 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.062 [2024-07-25 09:55:46.036612] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
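Illustrative sketch, not part of the captured trace: the locking_overlapped_coremask case traced here uses overlapping masks; 0x7 locks cores 0 to 2, so a second target asking for 0x1c (cores 2 to 4) cannot claim core 2 and exits, and the trace below then checks that only lock files _000 to _002 remain. The standalone form is an assumption.
  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x7 &                         # locks cores 0, 1 and 2
  "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock   # overlaps on core 2, cannot claim it and exits
  ls /var/tmp/spdk_cpu_lock_*                  # expected: spdk_cpu_lock_000 _001 _002 only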
00:07:07.062 [2024-07-25 09:55:46.036667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090685 ] 00:07:07.062 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.062 [2024-07-25 09:55:46.108893] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1090349 has claimed it. 00:07:07.062 [2024-07-25 09:55:46.108923] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1090685) - No such process 00:07:07.635 ERROR: process (pid: 1090685) is no longer running 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1090349 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1090349 ']' 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1090349 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1090349 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1090349' 00:07:07.635 killing process with pid 1090349 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 1090349 00:07:07.635 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1090349 00:07:07.896 00:07:07.896 real 0m1.754s 00:07:07.896 user 0m4.989s 00:07:07.896 sys 0m0.350s 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.896 ************************************ 00:07:07.896 END TEST locking_overlapped_coremask 00:07:07.896 ************************************ 00:07:07.896 09:55:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:07.896 09:55:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:07.896 09:55:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.896 09:55:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.896 ************************************ 00:07:07.896 START TEST locking_overlapped_coremask_via_rpc 00:07:07.896 ************************************ 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1090735 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1090735 /var/tmp/spdk.sock 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1090735 ']' 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:07.896 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.897 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:07.897 09:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.158 [2024-07-25 09:55:47.053670] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:08.158 [2024-07-25 09:55:47.053726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090735 ] 00:07:08.158 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.158 [2024-07-25 09:55:47.114767] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.158 [2024-07-25 09:55:47.114799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.158 [2024-07-25 09:55:47.186112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.158 [2024-07-25 09:55:47.186228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.158 [2024-07-25 09:55:47.186252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1091061 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1091061 /var/tmp/spdk2.sock 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1091061 ']' 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:08.730 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.731 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:08.731 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:08.731 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:08.731 09:55:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.993 [2024-07-25 09:55:47.870421] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:08.993 [2024-07-25 09:55:47.870475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091061 ] 00:07:08.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.993 [2024-07-25 09:55:47.940697] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:08.993 [2024-07-25 09:55:47.940718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.993 [2024-07-25 09:55:48.052485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:08.993 [2024-07-25 09:55:48.052640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.993 [2024-07-25 09:55:48.052643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.565 [2024-07-25 09:55:48.648266] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1090735 has claimed it. 
00:07:09.565 request: 00:07:09.565 { 00:07:09.565 "method": "framework_enable_cpumask_locks", 00:07:09.565 "req_id": 1 00:07:09.565 } 00:07:09.565 Got JSON-RPC error response 00:07:09.565 response: 00:07:09.565 { 00:07:09.565 "code": -32603, 00:07:09.565 "message": "Failed to claim CPU core: 2" 00:07:09.565 } 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1090735 /var/tmp/spdk.sock 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1090735 ']' 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.565 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1091061 /var/tmp/spdk2.sock 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1091061 ']' 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.827 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:10.088 00:07:10.088 real 0m2.011s 00:07:10.088 user 0m0.778s 00:07:10.088 sys 0m0.154s 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.088 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.088 ************************************ 00:07:10.088 END TEST locking_overlapped_coremask_via_rpc 00:07:10.088 ************************************ 00:07:10.088 09:55:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:10.088 09:55:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1090735 ]] 00:07:10.088 09:55:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1090735 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1090735 ']' 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1090735 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1090735 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1090735' 00:07:10.088 killing process with pid 1090735 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1090735 00:07:10.088 09:55:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1090735 00:07:10.349 09:55:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1091061 ]] 00:07:10.349 09:55:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1091061 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1091061 ']' 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1091061 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1091061 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1091061' 00:07:10.349 killing process with pid 1091061 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1091061 00:07:10.349 09:55:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1091061 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1090735 ]] 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1090735 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1090735 ']' 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1090735 00:07:10.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1090735) - No such process 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1090735 is not found' 00:07:10.611 Process with pid 1090735 is not found 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1091061 ]] 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1091061 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1091061 ']' 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1091061 00:07:10.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1091061) - No such process 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1091061 is not found' 00:07:10.611 Process with pid 1091061 is not found 00:07:10.611 09:55:49 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:10.611 00:07:10.611 real 0m15.603s 00:07:10.611 user 0m26.981s 00:07:10.611 sys 0m4.612s 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.611 09:55:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.611 ************************************ 00:07:10.611 END TEST cpu_locks 00:07:10.611 ************************************ 00:07:10.611 00:07:10.611 real 0m41.286s 00:07:10.611 user 1m21.007s 00:07:10.611 sys 0m7.766s 00:07:10.611 09:55:49 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:10.611 09:55:49 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.611 ************************************ 00:07:10.611 END TEST event 00:07:10.611 ************************************ 00:07:10.611 09:55:49 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.611 09:55:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:10.611 09:55:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.611 09:55:49 -- common/autotest_common.sh@10 -- # set +x 00:07:10.611 ************************************ 00:07:10.611 START TEST thread 00:07:10.611 ************************************ 00:07:10.611 09:55:49 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:10.873 * Looking for test storage... 00:07:10.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:10.873 09:55:49 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.873 09:55:49 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:10.873 09:55:49 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:10.873 09:55:49 thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.873 ************************************ 00:07:10.873 START TEST thread_poller_perf 00:07:10.873 ************************************ 00:07:10.873 09:55:49 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:10.873 [2024-07-25 09:55:49.824753] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:10.873 [2024-07-25 09:55:49.824854] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091497 ] 00:07:10.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.873 [2024-07-25 09:55:49.893850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.873 [2024-07-25 09:55:49.965042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.873 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:12.260 ====================================== 00:07:12.260 busy:2408042380 (cyc) 00:07:12.260 total_run_count: 287000 00:07:12.260 tsc_hz: 2400000000 (cyc) 00:07:12.260 ====================================== 00:07:12.260 poller_cost: 8390 (cyc), 3495 (nsec) 00:07:12.260 00:07:12.260 real 0m1.225s 00:07:12.260 user 0m1.142s 00:07:12.260 sys 0m0.078s 00:07:12.260 09:55:51 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.260 09:55:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:12.260 ************************************ 00:07:12.260 END TEST thread_poller_perf 00:07:12.260 ************************************ 00:07:12.260 09:55:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.260 09:55:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:12.260 09:55:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.260 09:55:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:12.260 ************************************ 00:07:12.260 START TEST thread_poller_perf 00:07:12.260 ************************************ 00:07:12.260 09:55:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:12.260 [2024-07-25 09:55:51.125998] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:12.260 [2024-07-25 09:55:51.126103] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091851 ] 00:07:12.260 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.260 [2024-07-25 09:55:51.188485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.260 [2024-07-25 09:55:51.257562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.260 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:13.203 ====================================== 00:07:13.203 busy:2402016196 (cyc) 00:07:13.203 total_run_count: 3810000 00:07:13.203 tsc_hz: 2400000000 (cyc) 00:07:13.203 ====================================== 00:07:13.203 poller_cost: 630 (cyc), 262 (nsec) 00:07:13.203 00:07:13.203 real 0m1.206s 00:07:13.203 user 0m1.138s 00:07:13.203 sys 0m0.064s 00:07:13.203 09:55:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.203 09:55:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.203 ************************************ 00:07:13.203 END TEST thread_poller_perf 00:07:13.203 ************************************ 00:07:13.465 09:55:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:13.465 00:07:13.465 real 0m2.683s 00:07:13.465 user 0m2.365s 00:07:13.465 sys 0m0.324s 00:07:13.465 09:55:52 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.465 09:55:52 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.465 ************************************ 00:07:13.465 END TEST thread 00:07:13.465 ************************************ 00:07:13.465 09:55:52 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:13.465 09:55:52 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.465 09:55:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:13.465 09:55:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.465 09:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:13.465 ************************************ 00:07:13.465 START TEST app_cmdline 00:07:13.465 ************************************ 00:07:13.465 09:55:52 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:13.465 * Looking for test storage... 00:07:13.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:13.465 09:55:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:13.465 09:55:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1092151 00:07:13.465 09:55:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1092151 00:07:13.466 09:55:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1092151 ']' 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:13.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:13.466 09:55:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:13.466 [2024-07-25 09:55:52.578630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:13.466 [2024-07-25 09:55:52.578703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092151 ] 00:07:13.727 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.727 [2024-07-25 09:55:52.645047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.727 [2024-07-25 09:55:52.720488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.299 09:55:53 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:14.299 09:55:53 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:14.299 09:55:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:14.560 { 00:07:14.560 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:14.560 "fields": { 00:07:14.560 "major": 24, 00:07:14.560 "minor": 9, 00:07:14.560 "patch": 0, 00:07:14.560 "suffix": "-pre", 00:07:14.560 "commit": "704257090" 00:07:14.560 } 00:07:14.560 } 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:14.560 09:55:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:14.560 09:55:53 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:14.821 request: 00:07:14.821 { 00:07:14.821 "method": "env_dpdk_get_mem_stats", 00:07:14.821 "req_id": 1 00:07:14.821 } 00:07:14.821 Got JSON-RPC error response 00:07:14.821 response: 00:07:14.821 { 00:07:14.821 "code": -32601, 00:07:14.821 "message": "Method not found" 00:07:14.821 } 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:14.821 09:55:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1092151 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1092151 ']' 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1092151 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1092151 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1092151' 00:07:14.821 killing process with pid 1092151 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@969 -- # kill 1092151 00:07:14.821 09:55:53 app_cmdline -- common/autotest_common.sh@974 -- # wait 1092151 00:07:15.082 00:07:15.082 real 0m1.559s 00:07:15.082 user 0m1.880s 00:07:15.082 sys 0m0.403s 00:07:15.082 09:55:53 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.082 09:55:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.082 ************************************ 00:07:15.082 END TEST app_cmdline 00:07:15.082 ************************************ 00:07:15.082 09:55:54 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.082 09:55:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.082 09:55:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.082 09:55:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.082 ************************************ 00:07:15.082 START TEST version 00:07:15.082 ************************************ 00:07:15.082 09:55:54 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:15.082 * Looking for test storage... 
00:07:15.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:15.082 09:55:54 version -- app/version.sh@17 -- # get_header_version major 00:07:15.082 09:55:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.082 09:55:54 version -- app/version.sh@14 -- # cut -f2 00:07:15.082 09:55:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.082 09:55:54 version -- app/version.sh@17 -- # major=24 00:07:15.082 09:55:54 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.082 09:55:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.082 09:55:54 version -- app/version.sh@14 -- # cut -f2 00:07:15.083 09:55:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.083 09:55:54 version -- app/version.sh@18 -- # minor=9 00:07:15.083 09:55:54 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.083 09:55:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.083 09:55:54 version -- app/version.sh@14 -- # cut -f2 00:07:15.083 09:55:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.083 09:55:54 version -- app/version.sh@19 -- # patch=0 00:07:15.083 09:55:54 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.083 09:55:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:15.083 09:55:54 version -- app/version.sh@14 -- # cut -f2 00:07:15.083 09:55:54 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.083 09:55:54 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.083 09:55:54 version -- app/version.sh@22 -- # version=24.9 00:07:15.083 09:55:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.083 09:55:54 version -- app/version.sh@28 -- # version=24.9rc0 00:07:15.083 09:55:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:15.083 09:55:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:15.343 09:55:54 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:15.343 09:55:54 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:15.343 00:07:15.343 real 0m0.178s 00:07:15.343 user 0m0.092s 00:07:15.343 sys 0m0.122s 00:07:15.343 09:55:54 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.343 09:55:54 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 ************************************ 00:07:15.343 END TEST version 00:07:15.343 ************************************ 00:07:15.343 09:55:54 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@202 -- # uname -s 00:07:15.343 09:55:54 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:15.343 09:55:54 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.343 09:55:54 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:15.343 09:55:54 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:07:15.343 09:55:54 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:15.343 09:55:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:15.343 09:55:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 09:55:54 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:15.343 09:55:54 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:15.343 09:55:54 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.343 09:55:54 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.343 09:55:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.343 09:55:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 ************************************ 00:07:15.343 START TEST nvmf_tcp 00:07:15.343 ************************************ 00:07:15.343 09:55:54 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:15.343 * Looking for test storage... 00:07:15.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:15.343 09:55:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:15.343 09:55:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:15.343 09:55:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:15.343 09:55:54 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.343 09:55:54 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.344 09:55:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:15.605 ************************************ 00:07:15.605 START TEST nvmf_target_core 00:07:15.605 ************************************ 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:15.605 * Looking for test storage... 00:07:15.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.605 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.606 ************************************ 00:07:15.606 START TEST nvmf_abort 00:07:15.606 ************************************ 00:07:15.606 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:15.867 * Looking for test storage... 
00:07:15.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.867 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:15.868 09:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:22.455 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:22.455 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.455 09:56:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:22.455 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:22.455 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:22.455 
09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:22.455 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:22.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:22.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:07:22.716 00:07:22.716 --- 10.0.0.2 ping statistics --- 00:07:22.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.716 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:07:22.716 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:22.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:22.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.384 ms 00:07:22.977 00:07:22.977 --- 10.0.0.1 ping statistics --- 00:07:22.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:22.977 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=1096381 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1096381 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1096381 ']' 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.977 09:56:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:22.977 [2024-07-25 09:56:01.954120] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:22.977 [2024-07-25 09:56:01.954169] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:22.977 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.977 [2024-07-25 09:56:02.036554] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:23.239 [2024-07-25 09:56:02.115678] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:23.239 [2024-07-25 09:56:02.115737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:23.239 [2024-07-25 09:56:02.115745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:23.239 [2024-07-25 09:56:02.115752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:23.239 [2024-07-25 09:56:02.115758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
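For readers reconstructing the nvmf_tcp_init sequence traced above, the topology reduces to a handful of iproute2/iptables commands plus launching nvmf_tgt inside the target namespace. This is a minimal stand-alone sketch, assuming the interface names and workspace path from this particular run (cvl_0_0 as the target-side port, cvl_0_1 as the initiator-side port); it illustrates the trace rather than reproducing the harness code.

  # Flush stale addressing, then isolate the target-side port in its own netns.
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # Initiator keeps 10.0.0.1 in the host namespace; target gets 10.0.0.2 inside the netns.
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port toward the initiator interface and verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target application runs inside the namespace; -m 0xE matches the
  # "Reactor started on core 1/2/3" notices in the startup log.
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &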
00:07:23.239 [2024-07-25 09:56:02.115891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.239 [2024-07-25 09:56:02.116058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.239 [2024-07-25 09:56:02.116060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.811 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:23.811 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:23.811 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:23.811 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:23.811 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 [2024-07-25 09:56:02.778147] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 Malloc0 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 Delay0 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 [2024-07-25 09:56:02.858563] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.812 09:56:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:23.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.073 [2024-07-25 09:56:03.010455] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:26.665 Initializing NVMe Controllers 00:07:26.665 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:26.665 controller IO queue size 128 less than required 00:07:26.665 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:26.665 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:26.665 Initialization complete. Launching workers. 
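The rpc_cmd calls in the abort test above are thin wrappers around scripts/rpc.py talking to the /var/tmp/spdk.sock socket the target is listening on. Condensed into plain invocations, with every argument taken verbatim from the trace and only the $RPC shorthand added here, the setup and the abort run look roughly like this:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
  # Back the namespace with a malloc bdev wrapped in a delay bdev so that
  # in-flight I/O lingers long enough to be aborted (delay arguments as traced).
  $RPC bdev_malloc_create 64 4096 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Drive the slow namespace at queue depth 128 and abort the queued I/O.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128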
00:07:26.665 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 124, failed: 27633 00:07:26.665 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27695, failed to submit 62 00:07:26.665 success 27637, unsuccess 58, failed 0 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:26.665 rmmod nvme_tcp 00:07:26.665 rmmod nvme_fabrics 00:07:26.665 rmmod nvme_keyring 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1096381 ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1096381 ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1096381' 00:07:26.665 killing process with pid 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1096381 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:26.665 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:28.581 00:07:28.581 real 0m12.839s 00:07:28.581 user 0m13.885s 00:07:28.581 sys 0m6.150s 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:28.581 ************************************ 00:07:28.581 END TEST nvmf_abort 00:07:28.581 ************************************ 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:28.581 ************************************ 00:07:28.581 START TEST nvmf_ns_hotplug_stress 00:07:28.581 ************************************ 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:28.581 * Looking for test storage... 
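The nvmftestfini teardown traced at the end of nvmf_abort above is symmetric with the setup: unload the initiator-side kernel modules, stop the target, and drop the per-test namespace and addressing. A rough stand-alone equivalent follows; the explicit netns delete is an assumption, since the trace hides _remove_spdk_ns output behind /dev/null, and $nvmfpid stands in for the pid recorded by nvmfappstart.

  # Remove the initiator-side kernel modules pulled in by "modprobe nvme-tcp" earlier.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # Stop the nvmf_tgt started for this test.
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
  # Assumed namespace cleanup (not visible in the trace), then flush the initiator
  # address exactly as traced.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1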
00:07:28.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.581 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:28.844 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:36.992 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.992 09:56:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:36.992 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:36.992 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:36.992 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.992 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.993 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.993 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.650 ms 00:07:36.993 00:07:36.993 --- 10.0.0.2 ping statistics --- 00:07:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.993 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.993 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.993 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.418 ms 00:07:36.993 00:07:36.993 --- 10.0.0.1 ping statistics --- 00:07:36.993 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.993 rtt min/avg/max/mdev = 0.418/0.418/0.418/0.000 ms 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:36.993 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1101407 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1101407 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1101407 ']' 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
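Before the ns_hotplug_stress RPC calls that follow, it helps to see the shape of the whole test in one place. The sketch below uses only commands that appear later in this trace; the surrounding while-loop is a reconstruction of the script's control flow (an assumption based on the repeated kill -0 / remove / add / resize pattern), and the $RPC shorthand is added for brevity.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Transport, subsystem cnode1 (max 10 namespaces), listener, and two backing bdevs.
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 512 -b Malloc0
  $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $RPC bdev_null_create NULL1 1000 512
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # 30 seconds of queued random reads from the host-side initiator.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!

  # While the load is alive, keep removing namespace 1, re-adding Delay0 and
  # growing NULL1 -- the namespace hotplug stress itself.
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do
      $RPC nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $RPC bdev_null_resize NULL1 "$null_size"
  done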
00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.993 [2024-07-25 09:56:15.063428] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:36.993 [2024-07-25 09:56:15.063497] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.993 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.993 [2024-07-25 09:56:15.151846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.993 [2024-07-25 09:56:15.245008] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.993 [2024-07-25 09:56:15.245068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.993 [2024-07-25 09:56:15.245075] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.993 [2024-07-25 09:56:15.245082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.993 [2024-07-25 09:56:15.245089] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.993 [2024-07-25 09:56:15.245237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:36.993 [2024-07-25 09:56:15.245478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.993 [2024-07-25 09:56:15.245478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:36.993 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.993 [2024-07-25 09:56:16.030534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.993 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.255 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.255 
[2024-07-25 09:56:16.373083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.515 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:37.515 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:37.776 Malloc0 00:07:37.776 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:37.776 Delay0 00:07:38.037 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.037 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:38.298 NULL1 00:07:38.298 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:38.298 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1101782 00:07:38.298 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:38.298 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:38.298 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.559 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.503 Read completed with error (sct=0, sc=11) 00:07:39.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.503 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.764 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:39.764 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:40.025 true 00:07:40.025 09:56:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:40.025 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.968 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.968 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:40.968 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:40.968 true 00:07:41.229 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:41.229 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.229 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.490 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:41.490 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:41.490 true 00:07:41.491 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:41.491 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.877 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:42.877 09:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:43.137 true 00:07:43.137 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:43.138 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.081 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.081 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.081 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:44.081 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:44.343 true 00:07:44.343 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:44.343 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.343 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:44.607 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:44.607 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:44.868 true 00:07:44.868 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:44.868 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.868 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.128 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:45.128 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:45.128 true 00:07:45.389 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:45.389 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.389 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.650 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:45.650 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:45.650 true 00:07:45.650 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:45.650 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.912 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.173 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:46.173 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:46.173 true 00:07:46.173 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:46.173 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.434 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.721 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:46.721 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:46.721 true 00:07:46.721 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:46.721 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.982 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.982 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:46.982 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:47.243 true 00:07:47.243 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:47.243 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.504 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.504 09:56:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:47.504 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:47.765 true 00:07:47.765 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:47.765 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.026 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:48.026 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:48.026 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:48.287 true 00:07:48.287 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:48.287 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.230 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.230 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:49.230 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:49.491 true 00:07:49.491 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:49.491 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.491 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.752 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:49.752 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:50.013 true 00:07:50.013 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:50.013 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:50.013 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.274 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:50.274 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:50.274 true 00:07:50.536 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:50.536 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.536 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.797 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:50.797 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:50.797 true 00:07:50.797 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:50.797 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.058 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.319 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:51.319 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:51.319 true 00:07:51.319 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:51.319 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.580 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.840 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:51.840 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:51.840 true 00:07:51.840 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:51.840 09:56:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.103 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.103 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:52.103 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:52.364 true 00:07:52.364 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:52.364 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.624 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.624 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:52.624 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:52.885 true 00:07:52.885 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:52.885 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.145 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.145 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:53.145 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:53.406 true 00:07:53.406 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:53.406 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.364 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.364 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:54.364 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:54.364 09:56:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:54.625 true 00:07:54.625 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:54.625 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.625 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.885 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:54.885 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:55.146 true 00:07:55.147 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:55.147 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.147 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.407 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:55.407 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:55.668 true 00:07:55.668 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:55.668 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.668 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.928 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:55.929 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:55.929 true 00:07:56.189 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:56.189 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.189 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.450 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:56.450 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:56.450 true 00:07:56.450 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:56.450 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.391 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.652 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:57.652 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:57.652 true 00:07:57.652 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:57.652 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.912 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.172 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:58.172 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:58.172 true 00:07:58.172 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:58.172 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.432 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.693 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:58.693 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:58.693 true 00:07:58.693 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:58.693 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.953 09:56:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.953 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:58.953 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:59.213 true 00:07:59.213 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:59.213 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:59.473 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.473 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:59.473 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:59.734 true 00:07:59.734 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:07:59.734 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.676 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.676 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:08:00.676 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:08:00.937 true 00:08:00.937 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:00.937 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.232 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.232 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:08:01.232 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:08:01.493 true 00:08:01.493 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:01.493 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.493 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.754 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:08:01.754 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:08:01.754 true 00:08:01.754 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:01.754 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.015 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.277 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:08:02.277 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:08:02.277 true 00:08:02.277 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:02.277 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.538 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.798 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:08:02.798 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:08:02.798 true 00:08:02.798 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:02.798 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:03.742 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.742 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.004 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:08:04.004 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:08:04.004 true 00:08:04.265 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:04.265 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.265 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.525 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:08:04.525 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:08:04.525 true 00:08:04.525 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:04.525 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.785 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.045 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:08:05.045 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:08:05.045 true 00:08:05.045 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:05.045 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.306 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.568 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:08:05.568 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:08:05.568 true 00:08:05.568 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:05.568 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.829 09:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.090 09:56:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:08:06.090 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:08:06.090 true 00:08:06.090 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:06.090 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.351 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.612 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:08:06.613 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:08:06.613 true 00:08:06.613 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:06.613 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.874 09:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.874 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:08:06.874 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:08:07.135 true 00:08:07.135 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:07.135 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.079 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.079 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:08:08.079 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:08:08.340 true 00:08:08.340 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782 00:08:08.340 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.601 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:08.601 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:08:08.601 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:08:08.601 Initializing NVMe Controllers
00:08:08.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:08.601 Controller IO queue size 128, less than required.
00:08:08.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.601 Controller IO queue size 128, less than required.
00:08:08.601 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:08.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:08.601 Initialization complete. Launching workers.
00:08:08.601 ========================================================
00:08:08.601 Latency(us)
00:08:08.602 Device Information : IOPS MiB/s Average min max
00:08:08.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 692.91 0.34 68709.02 2226.40 1144237.80
00:08:08.602 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11446.24 5.59 11182.41 2169.77 407321.53
00:08:08.602 ========================================================
00:08:08.602 Total : 12139.14 5.93 14466.04 2169.77 1144237.80
00:08:08.602
00:08:08.863 true
00:08:08.863 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1101782
00:08:08.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1101782) - No such process
00:08:08.863 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1101782
00:08:08.863 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:08.863 09:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:09.125 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:09.125 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:09.125 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:09.125 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:09.125 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:09.386 null0
00:08:09.386 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:09.386 09:56:48
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.386 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:09.386 null1 00:08:09.386 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.386 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.386 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:09.647 null2 00:08:09.647 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.647 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.647 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:09.647 null3 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:09.908 null4 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:09.908 09:56:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:10.169 null5 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:10.169 null6 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.169 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:10.431 null7 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 
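[editor's note] The single-namespace stress phase that finishes just above (ending with "wait 1101782" and the two nvmf_subsystem_remove_ns calls at script lines 54-55) repeats the same xtrace steps @44-@50 until the background I/O process 1101782 exits. A minimal shell sketch of that loop, reconstructed only from the xtrace entries, is given below; the wrapper variable names and the starting size are illustrative assumptions, and rpc.py stands for the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown in the log.

    # Sketch of the @44-@55 loop seen in the xtrace (reconstruction, not the actual script text)
    io_pid=1101782      # assumed variable name; the PID is the one observed in the trace
    null_size=1000      # assumed starting value; the log shows it reaching 1046
    while kill -0 "$io_pid"; do                                           # @44: loop while the I/O generator lives
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1      # @45: hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # @46: re-attach the Delay0 bdev
        null_size=$((null_size + 1))                                      # @49: bump the null bdev size
        rpc.py bdev_null_resize NULL1 "$null_size"                        # @50: resize NULL1 under load
    done
    wait "$io_pid"                                                        # @53: reap the I/O process
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1          # @54: clean up both namespaces
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2          # @55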
00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
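[editor's note] The add_remove calls being launched here are traced at ns_hotplug_stress.sh lines 14-18. A sketch of that helper, reconstructed from the @14-@18 xtrace entries around this point (the positional-parameter names are inferred from the "add_remove 1 null0" style invocations), looks like:

    # add_remove <nsid> <bdev>: attach and detach the same namespace ten times (sketch)
    add_remove() {
        local nsid=$1 bdev=$2                                                            # @14
        for ((i = 0; i < 10; i++)); do                                                   # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17: add NSID backed by the given null bdev
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18: immediately remove it again
        done
    }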
00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
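[editor's note] Putting the pieces together, the parallel phase whose xtrace starts at @58 above creates one null bdev per worker and then runs eight add_remove jobs concurrently; the "wait 1108300 1108303 ..." entry just below is the @66 step collecting them. A sketch under the same assumptions as the previous notes:

    # Parallel hot-plug phase, reconstructed from the @58-@66 xtrace entries (sketch)
    nthreads=8                                                  # @58
    pids=()                                                     # @58
    for ((i = 0; i < nthreads; i++)); do                        # @59
        rpc.py bdev_null_create "null$i" 100 4096               # @60: 100 MiB null bdev with 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                        # @62
        add_remove $((i + 1)) "null$i" &                        # @63: NSID 1..8 hammered against null0..null7
        pids+=($!)                                              # @64: remember each background worker
    done
    wait "${pids[@]}"                                           # @66: the eight PIDs listed in the log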
00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.431 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1108300 1108303 1108304 1108306 1108309 1108312 1108314 1108316 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.432 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.693 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:10.955 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.217 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.479 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:11.740 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:11.740 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:11.740 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.740 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:11.741 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.003 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.003 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.265 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.527 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:12.528 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:12.789 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.050 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.050 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.050 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.050 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.050 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.050 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.050 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.050 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.050 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.050 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.051 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.312 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:13.574 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.835 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:13.836 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.097 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.097 rmmod nvme_tcp 00:08:14.097 rmmod nvme_fabrics 00:08:14.097 rmmod nvme_keyring 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1101407 ']' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1101407 ']' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1101407' 00:08:14.097 killing process with pid 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1101407 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.097 09:56:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.689 00:08:16.689 real 0m47.697s 00:08:16.689 user 3m8.681s 00:08:16.689 sys 0m15.147s 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 ************************************ 00:08:16.689 END TEST nvmf_ns_hotplug_stress 00:08:16.689 ************************************ 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 ************************************ 00:08:16.689 START TEST nvmf_delete_subsystem 00:08:16.689 ************************************ 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:16.689 * Looking for test storage... 
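[Editor's note] To make the wall of xtrace above easier to follow: lines 16-18 of target/ns_hotplug_stress.sh drive ten iterations (the "i < 10" guard) in which namespaces 1-8, each backed by one of the null bdevs null0-null7, are hot-added to nqn.2016-06.io.spdk:cnode1 and then hot-removed again while the target stays live. Below is a minimal sketch of that loop reconstructed from the xtrace only; the real script's ordering, backgrounding, and helper variables may differ. When the loop finishes, the trace shows nvmftestfini unloading nvme-tcp/nvme-fabrics, killing the target process (pid 1101407), and the harness moving straight on to run_test nvmf_delete_subsystem.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace (ns_hotplug_stress.sh@16-18), not the script verbatim.
    # The rpc.py path is the workspace copy used throughout this log.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for ((i = 0; i < 10; ++i)); do
        # hot-add nsid 1..8, each mapped onto the matching null bdev (null0..null7)
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
        done
        # hot-remove the same namespaces; in the log these calls land in a scrambled
        # order, which suggests the real script issues them concurrently
        for n in {1..8}; do
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$n"
        done
    done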
00:08:16.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.689 09:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
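[Editor's note] Condensing the common.sh setup traced above: before nvmftestinit touches any hardware, the harness pins the test topology (listener ports, IP prefix, a freshly generated host NQN) and appends the shared-memory id and trace mask to the target application's argument list. An illustrative summary with the values shown in this run follows; the host NQN/ID pair comes from nvme gen-hostnqn, so it changes every run, and the HOSTID derivation below is written out for clarity rather than copied from common.sh.

    # Test topology fixed by nvmf/common.sh (values as traced in this run)
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVMF_IP_LEAST_ADDR=8
    NVMF_TCP_IP_ADDRESS=127.0.0.1
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NET_TYPE=phy                                    # physical NICs, not virtual/veth
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    # Per-run initiator identity
    NVME_HOSTNQN=$(nvme gen-hostnqn)                # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}                 # uuid part, e.g. 00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'

    # Arguments appended to the nvmf target app's command line
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)     # shm id supplied by the harness, wide log mask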
00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:23.283 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:23.283 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.283 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:23.284 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:23.284 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.284 09:57:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.284 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.545 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.545 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.545 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.714 ms 00:08:23.545 00:08:23.545 --- 10.0.0.2 ping statistics --- 00:08:23.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.545 rtt min/avg/max/mdev = 0.714/0.714/0.714/0.000 ms 00:08:23.545 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:23.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:08:23.546 00:08:23.546 --- 10.0.0.1 ping statistics --- 00:08:23.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.546 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1113565 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1113565 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1113565 ']' 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:23.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:23.546 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:23.546 [2024-07-25 09:57:02.597230] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:23.546 [2024-07-25 09:57:02.597302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.546 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.546 [2024-07-25 09:57:02.667798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:23.807 [2024-07-25 09:57:02.741831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.807 [2024-07-25 09:57:02.741871] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.807 [2024-07-25 09:57:02.741879] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.807 [2024-07-25 09:57:02.741885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.807 [2024-07-25 09:57:02.741891] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.807 [2024-07-25 09:57:02.742038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.807 [2024-07-25 09:57:02.742039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.378 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 [2024-07-25 09:57:03.409755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 [2024-07-25 09:57:03.433928] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 NULL1 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 Delay0 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1113770 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:24.379 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:24.379 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.640 [2024-07-25 09:57:03.530604] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
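At this point delete_subsystem.sh has a complete target stack running inside the cvl_0_0_ns_spdk namespace: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a namespace backed by a delay bdev (Delay0) layered on a null bdev (NULL1) so that I/O stays queued long enough to race against the teardown. Condensed from the xtrace above (paths shortened, and the backgrounding/sequencing shown only approximately; rpc_cmd is the autotest wrapper around the target's JSON-RPC socket), the setup is roughly:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &   # target on cores 0-1
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512           # null bdev: 1000 MiB, 512-byte blocks
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &     # initiator I/O on cores 2-3
    sleep 2
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The "starting I/O failed: -6" (ENXIO) and "Read/Write completed with error (sct=0, sc=8)" lines that follow are the expected outcome: perf still has up to 128 commands queued against the slow Delay0 namespace when the subsystem is deleted, and the point of the test is that the deletion succeeds while I/O is in flight, with outstanding commands failed back to the initiator rather than left hanging.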
00:08:26.555 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.555 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.555 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 starting I/O failed: -6 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 [2024-07-25 09:57:05.799683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d710 is same with the state(5) to be set 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Write 
completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.816 Write completed with error (sct=0, sc=8) 00:08:26.816 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with 
error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 starting I/O failed: -6 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 [2024-07-25 09:57:05.803223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91bc000c00 is same with the state(5) to be set 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed 
with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Write completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:26.817 Read completed with error (sct=0, sc=8) 00:08:27.761 [2024-07-25 09:57:06.755257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64eac0 is same with the state(5) to be set 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 [2024-07-25 09:57:06.803259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64da40 is same with the state(5) to be set 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 
00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 [2024-07-25 09:57:06.803699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64d3e0 is same with the state(5) to be set 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 [2024-07-25 09:57:06.805210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91bc00d000 is same with the state(5) to be set 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 
00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Write completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 Read completed with error (sct=0, sc=8) 00:08:27.761 [2024-07-25 09:57:06.805354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f91bc00d7a0 is same with the state(5) to be set 00:08:27.761 Initializing NVMe Controllers 00:08:27.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:27.761 Controller IO queue size 128, less than required. 00:08:27.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:27.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:27.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:27.761 Initialization complete. Launching workers. 00:08:27.761 ======================================================== 00:08:27.761 Latency(us) 00:08:27.761 Device Information : IOPS MiB/s Average min max 00:08:27.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.78 0.08 888007.20 225.54 1007197.54 00:08:27.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.35 0.08 1017458.11 299.32 2002422.75 00:08:27.761 ======================================================== 00:08:27.761 Total : 328.13 0.16 949295.04 225.54 2002422.75 00:08:27.761 00:08:27.761 [2024-07-25 09:57:06.805891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64eac0 (9): Bad file descriptor 00:08:27.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:27.761 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.761 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:27.761 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1113770 00:08:27.761 09:57:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1113770 00:08:28.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1113770) - No such process 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1113770 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1113770 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1113770 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.333 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 [2024-07-25 09:57:07.335416] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1114595 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:28.334 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:28.334 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.334 [2024-07-25 
09:57:07.407208] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:28.906 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:28.906 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:28.906 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.603 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.603 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:29.603 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:29.865 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:29.865 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:29.865 09:57:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:30.436 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:30.436 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:30.436 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.008 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.008 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:31.008 09:57:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.269 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.269 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:31.269 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:31.530 Initializing NVMe Controllers 00:08:31.530 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:31.530 Controller IO queue size 128, less than required. 00:08:31.530 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:31.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:31.530 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:31.530 Initialization complete. Launching workers. 
00:08:31.530 ======================================================== 00:08:31.530 Latency(us) 00:08:31.530 Device Information : IOPS MiB/s Average min max 00:08:31.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002592.47 1000341.67 1041891.97 00:08:31.530 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004212.55 1000473.36 1009971.45 00:08:31.530 ======================================================== 00:08:31.530 Total : 256.00 0.12 1003402.51 1000341.67 1041891.97 00:08:31.530 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1114595 00:08:31.791 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1114595) - No such process 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1114595 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:31.791 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:31.791 rmmod nvme_tcp 00:08:31.791 rmmod nvme_fabrics 00:08:32.053 rmmod nvme_keyring 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1113565 ']' 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1113565 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1113565 ']' 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1113565 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:32.053 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1113565 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1113565' 00:08:32.053 killing process with pid 1113565 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1113565 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 1113565 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.053 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:34.600 00:08:34.600 real 0m17.843s 00:08:34.600 user 0m30.998s 00:08:34.600 sys 0m6.147s 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 ************************************ 00:08:34.600 END TEST nvmf_delete_subsystem 00:08:34.600 ************************************ 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:34.600 ************************************ 00:08:34.600 START TEST nvmf_host_management 00:08:34.600 ************************************ 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:34.600 * Looking for test storage... 
00:08:34.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:34.600 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:34.601 09:57:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:41.256 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.257 
09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:41.257 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:41.257 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:41.257 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:41.257 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.257 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
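[Editorial note] The ping whose reply appears just below is the first traffic across the namespace split that nvmf_tcp_init has just finished building. Condensed out of the xtrace above, with the interface and namespace names exactly as this machine reports them, the plumbing amounts to:

    NS=cvl_0_0_ns_spdk        # target-side network namespace
    TGT_IF=cvl_0_0            # moved into the namespace, addressed as 10.0.0.2/24
    INI_IF=cvl_0_1            # stays in the default namespace, addressed as 10.0.0.1/24

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the default-namespace firewall
    ping -c 1 10.0.0.2                        # default namespace -> target namespace (this ping)
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> default namespace (the second ping below)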
00:08:41.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:08:41.519 00:08:41.519 --- 10.0.0.2 ping statistics --- 00:08:41.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.519 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.426 ms 00:08:41.519 00:08:41.519 --- 10.0.0.1 ping statistics --- 00:08:41.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.519 rtt min/avg/max/mdev = 0.426/0.426/0.426/0.000 ms 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1119999 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1119999 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1119999 ']' 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:41.519 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:41.781 [2024-07-25 09:57:20.654235] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:41.781 [2024-07-25 09:57:20.654285] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.781 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.781 [2024-07-25 09:57:20.738342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:41.781 [2024-07-25 09:57:20.830677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:41.781 [2024-07-25 09:57:20.830737] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:41.781 [2024-07-25 09:57:20.830746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:41.781 [2024-07-25 09:57:20.830753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:41.781 [2024-07-25 09:57:20.830760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:41.781 [2024-07-25 09:57:20.830899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:41.781 [2024-07-25 09:57:20.831067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:41.781 [2024-07-25 09:57:20.831250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:41.781 [2024-07-25 09:57:20.831250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.353 [2024-07-25 09:57:21.473143] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:42.353 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 Malloc0 00:08:42.615 [2024-07-25 09:57:21.536446] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1120124 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1120124 /var/tmp/bdevperf.sock 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1120124 ']' 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:42.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
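[Editorial note] The batched rpc_cmd a few lines up (host_management.sh@23/@30) is what produced the Malloc0 bdev and the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice, but the heredoc itself is not reproduced in this excerpt. A hedged reconstruction, taking MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 and NVMF_SERIAL from earlier in the trace plus the cnode0/host0 NQNs used by bdevperf and by the later remove_host call (the real script's exact flags may differ):

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Since the test relies on removing nqn.2016-06.io.spdk:host0 to cut the connection, the subsystem is presumably not created with allow-any-host; revoking that one host is then enough to drop the bdevperf session, which is what the second half of this test exercises.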
00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:42.615 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:42.616 { 00:08:42.616 "params": { 00:08:42.616 "name": "Nvme$subsystem", 00:08:42.616 "trtype": "$TEST_TRANSPORT", 00:08:42.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:42.616 "adrfam": "ipv4", 00:08:42.616 "trsvcid": "$NVMF_PORT", 00:08:42.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:42.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:42.616 "hdgst": ${hdgst:-false}, 00:08:42.616 "ddgst": ${ddgst:-false} 00:08:42.616 }, 00:08:42.616 "method": "bdev_nvme_attach_controller" 00:08:42.616 } 00:08:42.616 EOF 00:08:42.616 )") 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:42.616 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:42.616 "params": { 00:08:42.616 "name": "Nvme0", 00:08:42.616 "trtype": "tcp", 00:08:42.616 "traddr": "10.0.0.2", 00:08:42.616 "adrfam": "ipv4", 00:08:42.616 "trsvcid": "4420", 00:08:42.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:42.616 "hdgst": false, 00:08:42.616 "ddgst": false 00:08:42.616 }, 00:08:42.616 "method": "bdev_nvme_attach_controller" 00:08:42.616 }' 00:08:42.616 [2024-07-25 09:57:21.635627] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:42.616 [2024-07-25 09:57:21.635680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120124 ] 00:08:42.616 EAL: No free 2048 kB hugepages reported on node 1 00:08:42.616 [2024-07-25 09:57:21.694726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.877 [2024-07-25 09:57:21.760268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.877 Running I/O for 10 seconds... 
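[Editorial note] While bdevperf drives the 64-deep verify workload against Nvme0n1, the script first waits until a minimum amount of I/O has actually completed before it starts pulling the host's access away; that is the waitforio loop traced next (read_io_count=591 on the first poll). Stripped of the xtrace noise, and with the retry delay assumed because it is not visible in this excerpt, the check is roughly:

    # poll bdevperf over its own RPC socket until Nvme0n1 reports >= 100 completed reads
    ret=1
    i=10
    while (( i != 0 )); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                        | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1        # assumed delay, not shown in this trace
        (( i-- ))
    done

Once that passes, host_management.sh@84 removes nqn.2016-06.io.spdk:host0 from cnode0 while the queue is still full; the tcp.c qpair-state errors and the long run of ABORTED - SQ DELETION completions that dominate the rest of this excerpt are the direct consequence of that forced disconnect.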
00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=591 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 591 -ge 100 ']' 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.452 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.452 [2024-07-25 
09:57:22.479506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479643] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479655] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479686] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479692] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same 
with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.452 [2024-07-25 09:57:22.479799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479849] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479859] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.479973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x175e2a0 is same with the state(5) to be set 00:08:43.453 [2024-07-25 09:57:22.483613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.483986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.483993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
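[Editorial note] Each ABORTED - SQ DELETION completion in this run is one of bdevperf's in-flight commands (queue depth 64, -q 64) being failed back after the remove_host call above. A quick target-side way to confirm the host really is gone, not something the traced script does, and with field names as in recent SPDK nvmf_get_subsystems output (verify against your version), would be:

    rpc_cmd nvmf_get_subsystems | jq -e '
      .[] | select(.nqn == "nqn.2016-06.io.spdk:cnode0")
          | ((.hosts // []) | map(.nqn) | index("nqn.2016-06.io.spdk:host0")) == null'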
00:08:43.453 [2024-07-25 09:57:22.484002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.484009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.484019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.484026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.484035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.484042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.484051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.484058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.453 [2024-07-25 09:57:22.484067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.453 [2024-07-25 09:57:22.484075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 
09:57:22.484166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 09:57:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.454 [2024-07-25 09:57:22.484505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:90368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:90496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:90624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:43.454 [2024-07-25 09:57:22.484711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:43.454 [2024-07-25 09:57:22.484719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x28434f0 is same with the state(5) to be set 00:08:43.455 [2024-07-25 09:57:22.484758] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x28434f0 was disconnected and freed. reset controller. 00:08:43.455 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:43.455 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.455 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:43.455 [2024-07-25 09:57:22.485971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:43.455 task offset: 83328 on job bdev=Nvme0n1 fails 00:08:43.455 00:08:43.455 Latency(us) 00:08:43.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.455 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:43.455 Job: Nvme0n1 ended in about 0.50 seconds with error 00:08:43.455 Verification LBA range: start 0x0 length 0x400 00:08:43.455 Nvme0n1 : 0.50 1301.28 81.33 127.93 0.00 43620.80 1570.13 34515.63 00:08:43.455 =================================================================================================================== 00:08:43.455 Total : 1301.28 81.33 127.93 0.00 43620.80 1570.13 34515.63 00:08:43.455 [2024-07-25 09:57:22.487989] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.455 [2024-07-25 09:57:22.488012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24323b0 (9): Bad file descriptor 00:08:43.455 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.455 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:43.455 [2024-07-25 09:57:22.544459] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1120124 00:08:44.397 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1120124) - No such process 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:44.397 { 00:08:44.397 "params": { 00:08:44.397 "name": "Nvme$subsystem", 00:08:44.397 "trtype": "$TEST_TRANSPORT", 00:08:44.397 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:44.397 "adrfam": "ipv4", 00:08:44.397 "trsvcid": "$NVMF_PORT", 00:08:44.397 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:44.397 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:44.397 "hdgst": ${hdgst:-false}, 00:08:44.397 "ddgst": ${ddgst:-false} 00:08:44.397 }, 00:08:44.397 "method": "bdev_nvme_attach_controller" 00:08:44.397 } 00:08:44.397 EOF 00:08:44.397 )") 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:44.397 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:44.397 "params": { 00:08:44.397 "name": "Nvme0", 00:08:44.397 "trtype": "tcp", 00:08:44.397 "traddr": "10.0.0.2", 00:08:44.397 "adrfam": "ipv4", 00:08:44.397 "trsvcid": "4420", 00:08:44.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:44.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:44.397 "hdgst": false, 00:08:44.397 "ddgst": false 00:08:44.397 }, 00:08:44.397 "method": "bdev_nvme_attach_controller" 00:08:44.397 }' 00:08:44.658 [2024-07-25 09:57:23.560340] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:44.658 [2024-07-25 09:57:23.560394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120499 ] 00:08:44.658 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.658 [2024-07-25 09:57:23.619256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.658 [2024-07-25 09:57:23.682809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.919 Running I/O for 1 seconds... 00:08:45.862 00:08:45.862 Latency(us) 00:08:45.862 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.862 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:45.862 Verification LBA range: start 0x0 length 0x400 00:08:45.862 Nvme0n1 : 1.05 1155.64 72.23 0.00 0.00 54552.05 13380.27 49807.36 00:08:45.862 =================================================================================================================== 00:08:45.862 Total : 1155.64 72.23 0.00 0.00 54552.05 13380.27 49807.36 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:46.124 rmmod nvme_tcp 00:08:46.124 rmmod nvme_fabrics 00:08:46.124 rmmod nvme_keyring 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1119999 ']' 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1119999 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1119999 ']' 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1119999 00:08:46.124 09:57:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1119999 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1119999' 00:08:46.124 killing process with pid 1119999 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1119999 00:08:46.124 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1119999 00:08:46.385 [2024-07-25 09:57:25.331579] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.385 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.302 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:48.302 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:48.302 00:08:48.302 real 0m14.122s 00:08:48.302 user 0m22.616s 00:08:48.302 sys 0m6.325s 00:08:48.302 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:48.302 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:48.302 ************************************ 00:08:48.302 END TEST nvmf_host_management 00:08:48.302 ************************************ 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:48.564 ************************************ 00:08:48.564 START TEST nvmf_lvol 00:08:48.564 ************************************ 00:08:48.564 
09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:48.564 * Looking for test storage... 00:08:48.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:48.564 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:48.565 09:57:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.711 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:56.712 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:56.712 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:56.712 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:56.712 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.712 09:57:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:08:56.712 00:08:56.712 --- 10.0.0.2 ping statistics --- 00:08:56.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.712 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:56.712 00:08:56.712 --- 10.0.0.1 ping statistics --- 00:08:56.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.712 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1125144 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1125144 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1125144 ']' 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.712 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.712 [2024-07-25 09:57:34.890740] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:56.712 [2024-07-25 09:57:34.890837] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.712 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.712 [2024-07-25 09:57:34.965330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.712 [2024-07-25 09:57:35.040213] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.712 [2024-07-25 09:57:35.040252] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.712 [2024-07-25 09:57:35.040260] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.712 [2024-07-25 09:57:35.040266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.712 [2024-07-25 09:57:35.040272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.712 [2024-07-25 09:57:35.040451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.712 [2024-07-25 09:57:35.040566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.712 [2024-07-25 09:57:35.040568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:56.712 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.713 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:56.975 [2024-07-25 09:57:35.849028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.975 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.975 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:56.975 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.236 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:57.236 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:57.497 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:57.497 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=2b6af7c9-5c68-4d84-8ff9-49659374aa82 
00:08:57.497 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b6af7c9-5c68-4d84-8ff9-49659374aa82 lvol 20 00:08:57.759 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=200daa62-235f-4358-8872-29866694834f 00:08:57.759 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:58.020 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 200daa62-235f-4358-8872-29866694834f 00:08:58.020 09:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:58.281 [2024-07-25 09:57:37.250027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.281 09:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.542 09:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:58.542 09:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1125691 00:08:58.542 09:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:58.542 EAL: No free 2048 kB hugepages reported on node 1 00:08:59.485 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 200daa62-235f-4358-8872-29866694834f MY_SNAPSHOT 00:08:59.746 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5b1e172e-3697-4095-a5d9-ca8bc40492c5 00:08:59.746 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 200daa62-235f-4358-8872-29866694834f 30 00:08:59.746 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5b1e172e-3697-4095-a5d9-ca8bc40492c5 MY_CLONE 00:09:00.008 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=68f08ae1-8063-4753-ba5c-645751dbe39b 00:09:00.008 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 68f08ae1-8063-4753-ba5c-645751dbe39b 00:09:00.300 09:57:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1125691 00:09:10.320 Initializing NVMe Controllers 00:09:10.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:10.320 Controller IO queue size 128, less than required. 00:09:10.320 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:09:10.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:10.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:10.320 Initialization complete. Launching workers. 00:09:10.320 ======================================================== 00:09:10.320 Latency(us) 00:09:10.320 Device Information : IOPS MiB/s Average min max 00:09:10.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17540.00 68.52 7298.32 1416.16 49387.88 00:09:10.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12148.90 47.46 10539.15 3603.26 49429.59 00:09:10.320 ======================================================== 00:09:10.320 Total : 29688.90 115.97 8624.49 1416.16 49429.59 00:09:10.320 00:09:10.320 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:10.320 09:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 200daa62-235f-4358-8872-29866694834f 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b6af7c9-5c68-4d84-8ff9-49659374aa82 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.320 rmmod nvme_tcp 00:09:10.320 rmmod nvme_fabrics 00:09:10.320 rmmod nvme_keyring 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1125144 ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1125144 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1125144 ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1125144 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1125144 00:09:10.320 09:57:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1125144' 00:09:10.320 killing process with pid 1125144 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1125144 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1125144 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.320 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.704 00:09:11.704 real 0m23.140s 00:09:11.704 user 1m3.565s 00:09:11.704 sys 0m7.933s 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:11.704 ************************************ 00:09:11.704 END TEST nvmf_lvol 00:09:11.704 ************************************ 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.704 ************************************ 00:09:11.704 START TEST nvmf_lvs_grow 00:09:11.704 ************************************ 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:11.704 * Looking for test storage... 
00:09:11.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.704 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.705 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.965 09:57:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:11.965 09:57:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.965 09:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:18.555 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.555 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:18.556 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.556 
09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:18.556 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:18.556 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.556 09:57:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.556 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.677 ms 00:09:18.818 00:09:18.818 --- 10.0.0.2 ping statistics --- 00:09:18.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.818 rtt min/avg/max/mdev = 0.677/0.677/0.677/0.000 ms 00:09:18.818 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:18.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:09:18.819 00:09:18.819 --- 10.0.0.1 ping statistics --- 00:09:18.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.819 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.819 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1132124 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1132124 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1132124 ']' 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.080 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:19.080 [2024-07-25 09:57:58.033793] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:19.080 [2024-07-25 09:57:58.033861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.080 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.080 [2024-07-25 09:57:58.104483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.080 [2024-07-25 09:57:58.178000] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.080 [2024-07-25 09:57:58.178040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.080 [2024-07-25 09:57:58.178048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:19.080 [2024-07-25 09:57:58.178054] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:19.080 [2024-07-25 09:57:58.178059] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.080 [2024-07-25 09:57:58.178082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.024 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:20.024 [2024-07-25 09:57:58.981099] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:20.024 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:20.024 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:20.024 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:20.024 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:20.024 ************************************ 00:09:20.024 START TEST lvs_grow_clean 00:09:20.024 ************************************ 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.025 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:20.286 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:20.286 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:20.547 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b lvol 150 00:09:20.808 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=6b809fa8-19de-4482-99fc-6623d403198b 00:09:20.808 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:20.808 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:20.808 [2024-07-25 09:57:59.875256] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:20.808 [2024-07-25 09:57:59.875308] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:20.808 true 00:09:20.808 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:20.808 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:21.069 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:21.069 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:21.330 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6b809fa8-19de-4482-99fc-6623d403198b 00:09:21.330 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:21.591 [2024-07-25 09:58:00.509191] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1132586 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1132586 /var/tmp/bdevperf.sock 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1132586 ']' 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:21.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:21.591 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:21.591 [2024-07-25 09:58:00.710966] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:21.591 [2024-07-25 09:58:00.711015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1132586 ] 00:09:21.852 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.852 [2024-07-25 09:58:00.785498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.852 [2024-07-25 09:58:00.849858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:22.424 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:22.424 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:22.424 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:22.684 Nvme0n1 00:09:22.684 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:22.945 [ 00:09:22.945 { 00:09:22.945 "name": "Nvme0n1", 00:09:22.945 "aliases": [ 00:09:22.945 "6b809fa8-19de-4482-99fc-6623d403198b" 00:09:22.945 ], 00:09:22.945 "product_name": "NVMe disk", 00:09:22.945 "block_size": 4096, 00:09:22.945 "num_blocks": 38912, 00:09:22.945 "uuid": "6b809fa8-19de-4482-99fc-6623d403198b", 00:09:22.945 "assigned_rate_limits": { 00:09:22.945 "rw_ios_per_sec": 0, 00:09:22.945 "rw_mbytes_per_sec": 0, 00:09:22.945 "r_mbytes_per_sec": 0, 00:09:22.945 "w_mbytes_per_sec": 0 00:09:22.945 }, 00:09:22.945 "claimed": false, 00:09:22.945 "zoned": false, 00:09:22.945 "supported_io_types": { 00:09:22.945 "read": true, 00:09:22.945 "write": true, 00:09:22.945 "unmap": true, 00:09:22.945 "flush": true, 00:09:22.945 "reset": true, 00:09:22.945 "nvme_admin": true, 00:09:22.945 "nvme_io": true, 00:09:22.945 "nvme_io_md": false, 00:09:22.945 "write_zeroes": true, 00:09:22.945 "zcopy": false, 00:09:22.945 "get_zone_info": false, 00:09:22.945 "zone_management": false, 00:09:22.945 "zone_append": false, 00:09:22.945 "compare": true, 00:09:22.945 "compare_and_write": true, 00:09:22.945 "abort": true, 00:09:22.945 "seek_hole": false, 00:09:22.945 "seek_data": false, 00:09:22.945 "copy": true, 00:09:22.945 "nvme_iov_md": false 00:09:22.945 }, 00:09:22.945 "memory_domains": [ 00:09:22.945 { 00:09:22.945 "dma_device_id": "system", 00:09:22.945 "dma_device_type": 1 00:09:22.945 } 00:09:22.945 ], 00:09:22.945 "driver_specific": { 00:09:22.945 "nvme": [ 00:09:22.945 { 00:09:22.945 "trid": { 00:09:22.945 "trtype": "TCP", 00:09:22.945 "adrfam": "IPv4", 00:09:22.945 "traddr": "10.0.0.2", 00:09:22.945 "trsvcid": "4420", 00:09:22.945 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:22.945 }, 00:09:22.945 "ctrlr_data": { 00:09:22.945 "cntlid": 1, 00:09:22.945 "vendor_id": "0x8086", 00:09:22.945 "model_number": "SPDK bdev Controller", 00:09:22.945 "serial_number": "SPDK0", 00:09:22.945 "firmware_revision": "24.09", 00:09:22.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:22.945 "oacs": { 00:09:22.945 "security": 0, 00:09:22.945 "format": 0, 00:09:22.945 "firmware": 0, 00:09:22.945 "ns_manage": 0 00:09:22.945 }, 00:09:22.945 
"multi_ctrlr": true, 00:09:22.945 "ana_reporting": false 00:09:22.945 }, 00:09:22.945 "vs": { 00:09:22.945 "nvme_version": "1.3" 00:09:22.945 }, 00:09:22.945 "ns_data": { 00:09:22.945 "id": 1, 00:09:22.945 "can_share": true 00:09:22.945 } 00:09:22.945 } 00:09:22.945 ], 00:09:22.945 "mp_policy": "active_passive" 00:09:22.945 } 00:09:22.945 } 00:09:22.945 ] 00:09:22.945 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1132916 00:09:22.945 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:22.945 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:22.945 Running I/O for 10 seconds... 00:09:23.885 Latency(us) 00:09:23.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.885 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.885 Nvme0n1 : 1.00 17971.00 70.20 0.00 0.00 0.00 0.00 0.00 00:09:23.885 =================================================================================================================== 00:09:23.885 Total : 17971.00 70.20 0.00 0.00 0.00 0.00 0.00 00:09:23.885 00:09:24.824 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:24.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.824 Nvme0n1 : 2.00 18137.00 70.85 0.00 0.00 0.00 0.00 0.00 00:09:24.824 =================================================================================================================== 00:09:24.824 Total : 18137.00 70.85 0.00 0.00 0.00 0.00 0.00 00:09:24.824 00:09:25.083 true 00:09:25.083 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:25.083 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:25.083 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:25.083 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:25.083 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1132916 00:09:26.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.024 Nvme0n1 : 3.00 18180.00 71.02 0.00 0.00 0.00 0.00 0.00 00:09:26.024 =================================================================================================================== 00:09:26.024 Total : 18180.00 71.02 0.00 0.00 0.00 0.00 0.00 00:09:26.024 00:09:27.027 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.027 Nvme0n1 : 4.00 18220.50 71.17 0.00 0.00 0.00 0.00 0.00 00:09:27.027 =================================================================================================================== 00:09:27.027 Total : 18220.50 71.17 0.00 0.00 0.00 0.00 0.00 00:09:27.027 00:09:27.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:27.971 Nvme0n1 : 5.00 18237.20 71.24 0.00 0.00 0.00 0.00 0.00 00:09:27.971 =================================================================================================================== 00:09:27.971 Total : 18237.20 71.24 0.00 0.00 0.00 0.00 0.00 00:09:27.971 00:09:28.914 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.914 Nvme0n1 : 6.00 18269.67 71.37 0.00 0.00 0.00 0.00 0.00 00:09:28.914 =================================================================================================================== 00:09:28.914 Total : 18269.67 71.37 0.00 0.00 0.00 0.00 0.00 00:09:28.914 00:09:29.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.857 Nvme0n1 : 7.00 18287.43 71.44 0.00 0.00 0.00 0.00 0.00 00:09:29.857 =================================================================================================================== 00:09:29.857 Total : 18287.43 71.44 0.00 0.00 0.00 0.00 0.00 00:09:29.857 00:09:31.245 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.245 Nvme0n1 : 8.00 18294.25 71.46 0.00 0.00 0.00 0.00 0.00 00:09:31.245 =================================================================================================================== 00:09:31.245 Total : 18294.25 71.46 0.00 0.00 0.00 0.00 0.00 00:09:31.245 00:09:31.817 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.817 Nvme0n1 : 9.00 18309.56 71.52 0.00 0.00 0.00 0.00 0.00 00:09:31.817 =================================================================================================================== 00:09:31.817 Total : 18309.56 71.52 0.00 0.00 0.00 0.00 0.00 00:09:31.817 00:09:33.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.203 Nvme0n1 : 10.00 18318.00 71.55 0.00 0.00 0.00 0.00 0.00 00:09:33.203 =================================================================================================================== 00:09:33.203 Total : 18318.00 71.55 0.00 0.00 0.00 0.00 0.00 00:09:33.203 00:09:33.203 00:09:33.203 Latency(us) 00:09:33.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.203 Nvme0n1 : 10.01 18320.35 71.56 0.00 0.00 6984.59 5133.65 16274.77 00:09:33.203 =================================================================================================================== 00:09:33.203 Total : 18320.35 71.56 0.00 0.00 6984.59 5133.65 16274.77 00:09:33.203 0 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1132586 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1132586 ']' 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1132586 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:33.204 09:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1132586 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:33.204 
09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1132586' 00:09:33.204 killing process with pid 1132586 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1132586 00:09:33.204 Received shutdown signal, test time was about 10.000000 seconds 00:09:33.204 00:09:33.204 Latency(us) 00:09:33.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.204 =================================================================================================================== 00:09:33.204 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1132586 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:33.204 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:33.465 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:33.465 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:33.727 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:33.728 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:33.728 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:33.728 [2024-07-25 09:58:12.805749] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:33.990 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:33.990 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:33.991 09:58:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:33.991 request: 00:09:33.991 { 00:09:33.991 "uuid": "6be02ae5-96b1-4663-9fda-fd094ef1df8b", 00:09:33.991 "method": "bdev_lvol_get_lvstores", 00:09:33.991 "req_id": 1 00:09:33.991 } 00:09:33.991 Got JSON-RPC error response 00:09:33.991 response: 00:09:33.991 { 00:09:33.991 "code": -19, 00:09:33.991 "message": "No such device" 00:09:33.991 } 00:09:33.991 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:33.991 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:33.991 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:33.991 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:33.991 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:34.252 aio_bdev 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6b809fa8-19de-4482-99fc-6623d403198b 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=6b809fa8-19de-4482-99fc-6623d403198b 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:34.252 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:34.514 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 6b809fa8-19de-4482-99fc-6623d403198b -t 2000 00:09:34.514 [ 00:09:34.514 { 00:09:34.514 "name": "6b809fa8-19de-4482-99fc-6623d403198b", 00:09:34.514 "aliases": [ 00:09:34.514 "lvs/lvol" 00:09:34.514 ], 00:09:34.514 "product_name": "Logical Volume", 00:09:34.514 "block_size": 4096, 00:09:34.514 "num_blocks": 38912, 00:09:34.514 "uuid": "6b809fa8-19de-4482-99fc-6623d403198b", 00:09:34.514 "assigned_rate_limits": { 00:09:34.514 "rw_ios_per_sec": 0, 00:09:34.514 "rw_mbytes_per_sec": 0, 00:09:34.514 "r_mbytes_per_sec": 0, 00:09:34.514 "w_mbytes_per_sec": 0 00:09:34.514 }, 00:09:34.514 "claimed": false, 00:09:34.514 "zoned": false, 00:09:34.514 "supported_io_types": { 00:09:34.514 "read": true, 00:09:34.514 "write": true, 00:09:34.514 "unmap": true, 00:09:34.514 "flush": false, 00:09:34.514 "reset": true, 00:09:34.514 "nvme_admin": false, 00:09:34.514 "nvme_io": false, 00:09:34.514 "nvme_io_md": false, 00:09:34.514 "write_zeroes": true, 00:09:34.514 "zcopy": false, 00:09:34.514 "get_zone_info": false, 00:09:34.514 "zone_management": false, 00:09:34.514 "zone_append": false, 00:09:34.514 "compare": false, 00:09:34.514 "compare_and_write": false, 00:09:34.514 "abort": false, 00:09:34.514 "seek_hole": true, 00:09:34.514 "seek_data": true, 00:09:34.514 "copy": false, 00:09:34.514 "nvme_iov_md": false 00:09:34.514 }, 00:09:34.514 "driver_specific": { 00:09:34.514 "lvol": { 00:09:34.514 "lvol_store_uuid": "6be02ae5-96b1-4663-9fda-fd094ef1df8b", 00:09:34.514 "base_bdev": "aio_bdev", 00:09:34.514 "thin_provision": false, 00:09:34.514 "num_allocated_clusters": 38, 00:09:34.514 "snapshot": false, 00:09:34.514 "clone": false, 00:09:34.514 "esnap_clone": false 00:09:34.514 } 00:09:34.514 } 00:09:34.514 } 00:09:34.514 ] 00:09:34.514 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:34.514 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:34.514 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:34.774 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:34.774 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:34.774 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:34.774 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:34.774 09:58:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6b809fa8-19de-4482-99fc-6623d403198b 00:09:35.036 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6be02ae5-96b1-4663-9fda-fd094ef1df8b 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.297 00:09:35.297 real 0m15.320s 00:09:35.297 user 0m15.003s 00:09:35.297 sys 0m1.291s 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:35.297 ************************************ 00:09:35.297 END TEST lvs_grow_clean 00:09:35.297 ************************************ 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:35.297 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:35.559 ************************************ 00:09:35.559 START TEST lvs_grow_dirty 00:09:35.559 ************************************ 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:35.559 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:35.821 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:35.821 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:35.821 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:36.082 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:36.082 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:36.082 09:58:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f30c4fa-695e-4874-9f79-53a88693dde5 lvol 150 00:09:36.082 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:36.082 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:36.082 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:36.344 [2024-07-25 09:58:15.262251] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:36.344 [2024-07-25 09:58:15.262303] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:36.344 true 00:09:36.344 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:36.344 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:36.344 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:36.344 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:36.606 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:36.606 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:36.867 [2024-07-25 09:58:15.864092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:36.867 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
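The trace above is the whole lvs_grow_dirty setup in one pass: a 200 MB backing file is truncated into existence, registered as an aio bdev with a 4 KiB block size, turned into lvstore "lvs" (4 MiB clusters, 49 data clusters), a 150 MB lvol is carved out of it, the file is then grown to 400 MB and the aio bdev rescanned, and the lvol is exported over NVMe/TCP on 10.0.0.2:4420. A minimal standalone sketch of the same sequence, assuming a running nvmf_tgt and a placeholder backing-file path instead of the harness's test/nvmf/target/aio_bdev:

  RPC=./scripts/rpc.py
  AIO_FILE=/tmp/aio_bdev_file                    # placeholder; the harness uses test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO_FILE"                   # aio_init_size_mb=200
  $RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
  lvs=$($RPC bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 150)
  truncate -s 400M "$AIO_FILE"                   # aio_final_size_mb=400: grow the file under the bdev
  $RPC bdev_aio_rescan aio_bdev                  # bdev now reports 102400 blocks; lvstore still shows 49 clusters
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note that growing the file and rescanning the aio bdev does not by itself grow the lvstore; that is exactly what the later bdev_lvol_grow_lvstore call in this run exercises.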
00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1135722 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1135722 /var/tmp/bdevperf.sock 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1135722 ']' 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:37.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:37.128 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:37.128 [2024-07-25 09:58:16.078127] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
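At this point bdevperf has been launched with its own RPC socket and -z, so it sits idle until it is told what to run. The harness then attaches the exported subsystem as bdev Nvme0n1 and drives a 10-second 4 KiB random-write workload through bdevperf.py. A condensed, hedged version of that handshake (flags copied from the trace; the backgrounding and sleep here are a simplification of the waitforlisten/killprocess helpers):

  SOCK=/var/tmp/bdevperf.sock
  ./build/examples/bdevperf -r "$SOCK" -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!                                # -z: initialize only, wait for RPC before doing I/O
  sleep 1                                        # crude stand-in for waitforlisten on $SOCK
  ./scripts/rpc.py -s "$SOCK" bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
  kill "$bdevperf_pid"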
00:09:37.128 [2024-07-25 09:58:16.078179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135722 ] 00:09:37.128 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.128 [2024-07-25 09:58:16.151486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.129 [2024-07-25 09:58:16.205150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.073 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.073 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:38.073 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:38.334 Nvme0n1 00:09:38.334 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:38.334 [ 00:09:38.334 { 00:09:38.334 "name": "Nvme0n1", 00:09:38.334 "aliases": [ 00:09:38.334 "54142bc9-648c-4e75-8867-7bb4e12832eb" 00:09:38.334 ], 00:09:38.334 "product_name": "NVMe disk", 00:09:38.334 "block_size": 4096, 00:09:38.334 "num_blocks": 38912, 00:09:38.334 "uuid": "54142bc9-648c-4e75-8867-7bb4e12832eb", 00:09:38.334 "assigned_rate_limits": { 00:09:38.334 "rw_ios_per_sec": 0, 00:09:38.334 "rw_mbytes_per_sec": 0, 00:09:38.334 "r_mbytes_per_sec": 0, 00:09:38.334 "w_mbytes_per_sec": 0 00:09:38.334 }, 00:09:38.334 "claimed": false, 00:09:38.334 "zoned": false, 00:09:38.334 "supported_io_types": { 00:09:38.334 "read": true, 00:09:38.334 "write": true, 00:09:38.334 "unmap": true, 00:09:38.334 "flush": true, 00:09:38.334 "reset": true, 00:09:38.334 "nvme_admin": true, 00:09:38.334 "nvme_io": true, 00:09:38.334 "nvme_io_md": false, 00:09:38.334 "write_zeroes": true, 00:09:38.334 "zcopy": false, 00:09:38.334 "get_zone_info": false, 00:09:38.334 "zone_management": false, 00:09:38.334 "zone_append": false, 00:09:38.334 "compare": true, 00:09:38.334 "compare_and_write": true, 00:09:38.334 "abort": true, 00:09:38.334 "seek_hole": false, 00:09:38.334 "seek_data": false, 00:09:38.334 "copy": true, 00:09:38.334 "nvme_iov_md": false 00:09:38.334 }, 00:09:38.334 "memory_domains": [ 00:09:38.334 { 00:09:38.334 "dma_device_id": "system", 00:09:38.334 "dma_device_type": 1 00:09:38.334 } 00:09:38.334 ], 00:09:38.334 "driver_specific": { 00:09:38.334 "nvme": [ 00:09:38.334 { 00:09:38.334 "trid": { 00:09:38.334 "trtype": "TCP", 00:09:38.334 "adrfam": "IPv4", 00:09:38.334 "traddr": "10.0.0.2", 00:09:38.334 "trsvcid": "4420", 00:09:38.334 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:38.334 }, 00:09:38.334 "ctrlr_data": { 00:09:38.334 "cntlid": 1, 00:09:38.334 "vendor_id": "0x8086", 00:09:38.334 "model_number": "SPDK bdev Controller", 00:09:38.334 "serial_number": "SPDK0", 00:09:38.334 "firmware_revision": "24.09", 00:09:38.334 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:38.334 "oacs": { 00:09:38.334 "security": 0, 00:09:38.334 "format": 0, 00:09:38.334 "firmware": 0, 00:09:38.334 "ns_manage": 0 00:09:38.334 }, 00:09:38.334 
"multi_ctrlr": true, 00:09:38.334 "ana_reporting": false 00:09:38.334 }, 00:09:38.334 "vs": { 00:09:38.334 "nvme_version": "1.3" 00:09:38.334 }, 00:09:38.334 "ns_data": { 00:09:38.334 "id": 1, 00:09:38.334 "can_share": true 00:09:38.334 } 00:09:38.334 } 00:09:38.334 ], 00:09:38.334 "mp_policy": "active_passive" 00:09:38.334 } 00:09:38.334 } 00:09:38.334 ] 00:09:38.334 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1136017 00:09:38.334 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:38.334 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:38.335 Running I/O for 10 seconds... 00:09:39.722 Latency(us) 00:09:39.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:39.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.722 Nvme0n1 : 1.00 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:09:39.722 =================================================================================================================== 00:09:39.722 Total : 17957.00 70.14 0.00 0.00 0.00 0.00 0.00 00:09:39.722 00:09:40.294 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:40.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.555 Nvme0n1 : 2.00 18101.00 70.71 0.00 0.00 0.00 0.00 0.00 00:09:40.555 =================================================================================================================== 00:09:40.555 Total : 18101.00 70.71 0.00 0.00 0.00 0.00 0.00 00:09:40.555 00:09:40.555 true 00:09:40.555 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:40.555 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:40.816 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:40.816 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:40.816 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1136017 00:09:41.388 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.388 Nvme0n1 : 3.00 18147.33 70.89 0.00 0.00 0.00 0.00 0.00 00:09:41.388 =================================================================================================================== 00:09:41.388 Total : 18147.33 70.89 0.00 0.00 0.00 0.00 0.00 00:09:41.388 00:09:42.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.809 Nvme0n1 : 4.00 18186.50 71.04 0.00 0.00 0.00 0.00 0.00 00:09:42.809 =================================================================================================================== 00:09:42.809 Total : 18186.50 71.04 0.00 0.00 0.00 0.00 0.00 00:09:42.809 00:09:43.381 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:09:43.381 Nvme0n1 : 5.00 18235.40 71.23 0.00 0.00 0.00 0.00 0.00 00:09:43.381 =================================================================================================================== 00:09:43.381 Total : 18235.40 71.23 0.00 0.00 0.00 0.00 0.00 00:09:43.381 00:09:44.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:44.776 Nvme0n1 : 6.00 18257.50 71.32 0.00 0.00 0.00 0.00 0.00 00:09:44.776 =================================================================================================================== 00:09:44.776 Total : 18257.50 71.32 0.00 0.00 0.00 0.00 0.00 00:09:44.776 00:09:45.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:45.348 Nvme0n1 : 7.00 18273.29 71.38 0.00 0.00 0.00 0.00 0.00 00:09:45.348 =================================================================================================================== 00:09:45.348 Total : 18273.29 71.38 0.00 0.00 0.00 0.00 0.00 00:09:45.348 00:09:46.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:46.734 Nvme0n1 : 8.00 18293.00 71.46 0.00 0.00 0.00 0.00 0.00 00:09:46.734 =================================================================================================================== 00:09:46.734 Total : 18293.00 71.46 0.00 0.00 0.00 0.00 0.00 00:09:46.734 00:09:47.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:47.676 Nvme0n1 : 9.00 18308.44 71.52 0.00 0.00 0.00 0.00 0.00 00:09:47.676 =================================================================================================================== 00:09:47.676 Total : 18308.44 71.52 0.00 0.00 0.00 0.00 0.00 00:09:47.676 00:09:48.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.620 Nvme0n1 : 10.00 18320.80 71.57 0.00 0.00 0.00 0.00 0.00 00:09:48.620 =================================================================================================================== 00:09:48.620 Total : 18320.80 71.57 0.00 0.00 0.00 0.00 0.00 00:09:48.620 00:09:48.620 00:09:48.620 Latency(us) 00:09:48.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.620 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:48.620 Nvme0n1 : 10.01 18319.39 71.56 0.00 0.00 6984.61 5270.19 18568.53 00:09:48.620 =================================================================================================================== 00:09:48.620 Total : 18319.39 71.56 0.00 0.00 6984.61 5270.19 18568.53 00:09:48.620 0 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1135722 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1135722 ']' 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1135722 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1135722 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:48.620 
09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1135722' 00:09:48.620 killing process with pid 1135722 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1135722 00:09:48.620 Received shutdown signal, test time was about 10.000000 seconds 00:09:48.620 00:09:48.620 Latency(us) 00:09:48.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:48.620 =================================================================================================================== 00:09:48.620 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1135722 00:09:48.620 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.882 09:58:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1132124 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1132124 00:09:49.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1132124 Killed "${NVMF_APP[@]}" "$@" 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1138338 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1138338 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1138338 ']' 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:49.144 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.405 [2024-07-25 09:58:28.320346] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:49.405 [2024-07-25 09:58:28.320402] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:49.405 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.405 [2024-07-25 09:58:28.385019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.405 [2024-07-25 09:58:28.449295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:49.405 [2024-07-25 09:58:28.449331] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:49.405 [2024-07-25 09:58:28.449339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:49.405 [2024-07-25 09:58:28.449349] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:49.405 [2024-07-25 09:58:28.449354] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
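This is the step that makes the test "dirty": the original nvmf_tgt (pid 1132124 in this run) is killed with SIGKILL while the lvstore is still open, so its superblock is never cleanly closed, and a fresh target is started inside the same network namespace. Roughly, using this run's pid and namespace name:

  kill -9 "$nvmf_app_pid"                        # leave the lvstore superblock dirty on purpose
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # after waiting for /var/tmp/spdk.sock, re-registering the backing file as an aio bdev
  # forces the blobstore to be replayed ("Performing recovery on blobstore" in the lines below):
  ./scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096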
00:09:49.405 [2024-07-25 09:58:28.449372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.977 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.977 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:49.977 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.977 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:49.977 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:50.238 [2024-07-25 09:58:29.262072] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:50.238 [2024-07-25 09:58:29.262159] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:50.238 [2024-07-25 09:58:29.262188] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:50.238 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:50.499 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54142bc9-648c-4e75-8867-7bb4e12832eb -t 2000 00:09:50.499 [ 00:09:50.499 { 00:09:50.499 "name": "54142bc9-648c-4e75-8867-7bb4e12832eb", 00:09:50.499 "aliases": [ 00:09:50.499 "lvs/lvol" 00:09:50.499 ], 00:09:50.499 "product_name": "Logical Volume", 00:09:50.499 "block_size": 4096, 00:09:50.499 "num_blocks": 38912, 00:09:50.499 "uuid": "54142bc9-648c-4e75-8867-7bb4e12832eb", 00:09:50.499 "assigned_rate_limits": { 00:09:50.499 "rw_ios_per_sec": 0, 00:09:50.499 "rw_mbytes_per_sec": 0, 00:09:50.499 "r_mbytes_per_sec": 0, 00:09:50.499 "w_mbytes_per_sec": 0 00:09:50.499 }, 00:09:50.499 "claimed": false, 00:09:50.499 "zoned": false, 
00:09:50.499 "supported_io_types": { 00:09:50.499 "read": true, 00:09:50.499 "write": true, 00:09:50.499 "unmap": true, 00:09:50.499 "flush": false, 00:09:50.499 "reset": true, 00:09:50.499 "nvme_admin": false, 00:09:50.499 "nvme_io": false, 00:09:50.499 "nvme_io_md": false, 00:09:50.499 "write_zeroes": true, 00:09:50.499 "zcopy": false, 00:09:50.499 "get_zone_info": false, 00:09:50.499 "zone_management": false, 00:09:50.499 "zone_append": false, 00:09:50.499 "compare": false, 00:09:50.499 "compare_and_write": false, 00:09:50.499 "abort": false, 00:09:50.500 "seek_hole": true, 00:09:50.500 "seek_data": true, 00:09:50.500 "copy": false, 00:09:50.500 "nvme_iov_md": false 00:09:50.500 }, 00:09:50.500 "driver_specific": { 00:09:50.500 "lvol": { 00:09:50.500 "lvol_store_uuid": "5f30c4fa-695e-4874-9f79-53a88693dde5", 00:09:50.500 "base_bdev": "aio_bdev", 00:09:50.500 "thin_provision": false, 00:09:50.500 "num_allocated_clusters": 38, 00:09:50.500 "snapshot": false, 00:09:50.500 "clone": false, 00:09:50.500 "esnap_clone": false 00:09:50.500 } 00:09:50.500 } 00:09:50.500 } 00:09:50.500 ] 00:09:50.500 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:50.500 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:50.500 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:50.761 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:50.761 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:50.761 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:51.022 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:51.022 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:51.022 [2024-07-25 09:58:30.078122] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:51.022 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:51.283 request: 00:09:51.283 { 00:09:51.283 "uuid": "5f30c4fa-695e-4874-9f79-53a88693dde5", 00:09:51.283 "method": "bdev_lvol_get_lvstores", 00:09:51.283 "req_id": 1 00:09:51.283 } 00:09:51.283 Got JSON-RPC error response 00:09:51.283 response: 00:09:51.283 { 00:09:51.283 "code": -19, 00:09:51.283 "message": "No such device" 00:09:51.283 } 00:09:51.283 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:51.283 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:51.283 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:51.283 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:51.283 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:51.543 aio_bdev 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.543 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:51.543 09:58:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 54142bc9-648c-4e75-8867-7bb4e12832eb -t 2000 00:09:51.802 [ 00:09:51.802 { 00:09:51.802 "name": "54142bc9-648c-4e75-8867-7bb4e12832eb", 00:09:51.802 "aliases": [ 00:09:51.802 "lvs/lvol" 00:09:51.802 ], 00:09:51.802 "product_name": "Logical Volume", 00:09:51.802 "block_size": 4096, 00:09:51.802 "num_blocks": 38912, 00:09:51.802 "uuid": "54142bc9-648c-4e75-8867-7bb4e12832eb", 00:09:51.802 "assigned_rate_limits": { 00:09:51.802 "rw_ios_per_sec": 0, 00:09:51.802 "rw_mbytes_per_sec": 0, 00:09:51.802 "r_mbytes_per_sec": 0, 00:09:51.802 "w_mbytes_per_sec": 0 00:09:51.802 }, 00:09:51.802 "claimed": false, 00:09:51.802 "zoned": false, 00:09:51.802 "supported_io_types": { 00:09:51.802 "read": true, 00:09:51.802 "write": true, 00:09:51.802 "unmap": true, 00:09:51.802 "flush": false, 00:09:51.802 "reset": true, 00:09:51.802 "nvme_admin": false, 00:09:51.802 "nvme_io": false, 00:09:51.802 "nvme_io_md": false, 00:09:51.802 "write_zeroes": true, 00:09:51.802 "zcopy": false, 00:09:51.802 "get_zone_info": false, 00:09:51.802 "zone_management": false, 00:09:51.802 "zone_append": false, 00:09:51.802 "compare": false, 00:09:51.803 "compare_and_write": false, 00:09:51.803 "abort": false, 00:09:51.803 "seek_hole": true, 00:09:51.803 "seek_data": true, 00:09:51.803 "copy": false, 00:09:51.803 "nvme_iov_md": false 00:09:51.803 }, 00:09:51.803 "driver_specific": { 00:09:51.803 "lvol": { 00:09:51.803 "lvol_store_uuid": "5f30c4fa-695e-4874-9f79-53a88693dde5", 00:09:51.803 "base_bdev": "aio_bdev", 00:09:51.803 "thin_provision": false, 00:09:51.803 "num_allocated_clusters": 38, 00:09:51.803 "snapshot": false, 00:09:51.803 "clone": false, 00:09:51.803 "esnap_clone": false 00:09:51.803 } 00:09:51.803 } 00:09:51.803 } 00:09:51.803 ] 00:09:51.803 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:51.803 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:51.803 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:52.063 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:52.063 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:52.063 09:58:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f30c4fa-695e-4874-9f79-53a88693dde5 00:09:52.063 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:52.063 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 54142bc9-648c-4e75-8867-7bb4e12832eb 00:09:52.324 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f30c4fa-695e-4874-9f79-53a88693dde5 
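The checks just above (nvmf_lvs_grow.sh@88/@89) repeat the post-recovery verification after the aio bdev has been hot-removed and re-registered: free_clusters is 61 and total_data_clusters is 99, so the bdev_lvol_grow_lvstore issued mid-run persisted through both the unclean shutdown and the hot-remove/re-attach, and the 150 MB lvol still owns its 38 clusters. What follows is plain teardown. Condensed, with jq used the same way the harness uses it:

  free=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
  total=$(./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( free == 61 && total == 99 ))                # 99 - 61 = 38 clusters allocated to the 150 MB lvol
  ./scripts/rpc.py bdev_lvol_delete "$lvol"
  ./scripts/rpc.py bdev_lvol_delete_lvstore -u "$lvs"
  # the aio bdev and its backing file are removed right after this (nvmf_lvs_grow.sh@94/@95 below)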
00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:52.584 00:09:52.584 real 0m17.213s 00:09:52.584 user 0m44.742s 00:09:52.584 sys 0m2.860s 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:52.584 ************************************ 00:09:52.584 END TEST lvs_grow_dirty 00:09:52.584 ************************************ 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:52.584 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:52.584 nvmf_trace.0 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.844 rmmod nvme_tcp 00:09:52.844 rmmod nvme_fabrics 00:09:52.844 rmmod nvme_keyring 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1138338 ']' 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1138338 00:09:52.844 
09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1138338 ']' 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1138338 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1138338 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1138338' 00:09:52.844 killing process with pid 1138338 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1138338 00:09:52.844 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1138338 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.103 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.104 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.104 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.017 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:55.018 00:09:55.018 real 0m43.377s 00:09:55.018 user 1m5.856s 00:09:55.018 sys 0m9.828s 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:55.018 ************************************ 00:09:55.018 END TEST nvmf_lvs_grow 00:09:55.018 ************************************ 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.018 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:55.280 ************************************ 00:09:55.280 START TEST nvmf_bdev_io_wait 00:09:55.280 ************************************ 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:55.280 * Looking for test storage... 00:09:55.280 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:55.280 
09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:55.280 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:10:01.875 09:58:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:01.875 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.875 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:01.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:01.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:01.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:01.876 09:58:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:01.876 09:58:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.876 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:02.138 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:02.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:02.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.505 ms 00:10:02.423 00:10:02.423 --- 10.0.0.2 ping statistics --- 00:10:02.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.423 rtt min/avg/max/mdev = 0.505/0.505/0.505/0.000 ms 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:02.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:02.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:10:02.423 00:10:02.423 --- 10.0.0.1 ping statistics --- 00:10:02.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:02.423 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1143099 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1143099 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1143099 ']' 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:02.423 09:58:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:02.423 [2024-07-25 09:58:41.394638] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
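For readers skimming the xtrace above: the nvmf_tcp_init sequence just traced builds a two-port loopback topology in which the target-side interface is hidden inside a network namespace, so the kernel initiator and the SPDK target can talk over real NIC ports on one host. A condensed sketch of those steps, with interface names and addresses taken from the trace (error handling and the surrounding nvmf/common.sh plumbing omitted):

ip netns add cvl_0_0_ns_spdk                        # namespace that will hold the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace

The two ping checks are the ones whose replies and statistics appear in the log just above.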
00:10:02.423 [2024-07-25 09:58:41.394703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:02.423 EAL: No free 2048 kB hugepages reported on node 1 00:10:02.423 [2024-07-25 09:58:41.466705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:02.423 [2024-07-25 09:58:41.545199] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:02.423 [2024-07-25 09:58:41.545248] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:02.423 [2024-07-25 09:58:41.545256] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:02.423 [2024-07-25 09:58:41.545263] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:02.423 [2024-07-25 09:58:41.545269] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:02.423 [2024-07-25 09:58:41.545352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.423 [2024-07-25 09:58:41.545459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:02.423 [2024-07-25 09:58:41.545500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.423 [2024-07-25 09:58:41.545502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 [2024-07-25 09:58:42.285634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 Malloc0 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:03.367 [2024-07-25 09:58:42.353326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1143452 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1143454 00:10:03.367 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:03.368 { 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme$subsystem", 00:10:03.368 "trtype": "$TEST_TRANSPORT", 00:10:03.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "$NVMF_PORT", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.368 "hdgst": ${hdgst:-false}, 00:10:03.368 "ddgst": ${ddgst:-false} 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 } 00:10:03.368 EOF 00:10:03.368 )") 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1143456 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:03.368 { 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme$subsystem", 00:10:03.368 "trtype": "$TEST_TRANSPORT", 00:10:03.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "$NVMF_PORT", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.368 "hdgst": ${hdgst:-false}, 00:10:03.368 "ddgst": ${ddgst:-false} 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 } 00:10:03.368 EOF 00:10:03.368 )") 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1143459 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:03.368 { 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme$subsystem", 00:10:03.368 "trtype": "$TEST_TRANSPORT", 00:10:03.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "$NVMF_PORT", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.368 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.368 "hdgst": ${hdgst:-false}, 00:10:03.368 "ddgst": ${ddgst:-false} 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 } 00:10:03.368 EOF 00:10:03.368 )") 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:03.368 { 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme$subsystem", 00:10:03.368 "trtype": "$TEST_TRANSPORT", 00:10:03.368 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "$NVMF_PORT", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.368 "hdgst": ${hdgst:-false}, 00:10:03.368 "ddgst": ${ddgst:-false} 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 } 00:10:03.368 EOF 00:10:03.368 )") 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1143452 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme1", 00:10:03.368 "trtype": "tcp", 00:10:03.368 "traddr": "10.0.0.2", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "4420", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.368 "hdgst": false, 00:10:03.368 "ddgst": false 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 }' 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme1", 00:10:03.368 "trtype": "tcp", 00:10:03.368 "traddr": "10.0.0.2", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "4420", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.368 "hdgst": false, 00:10:03.368 "ddgst": false 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 }' 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme1", 00:10:03.368 "trtype": "tcp", 00:10:03.368 "traddr": "10.0.0.2", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "4420", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.368 "hdgst": false, 00:10:03.368 "ddgst": false 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 }' 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:10:03.368 09:58:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:03.368 "params": { 00:10:03.368 "name": "Nvme1", 00:10:03.368 "trtype": "tcp", 00:10:03.368 "traddr": "10.0.0.2", 00:10:03.368 "adrfam": "ipv4", 00:10:03.368 "trsvcid": "4420", 00:10:03.368 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:03.368 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:03.368 "hdgst": false, 00:10:03.368 "ddgst": false 00:10:03.368 }, 00:10:03.368 "method": "bdev_nvme_attach_controller" 00:10:03.368 }' 00:10:03.368 [2024-07-25 09:58:42.406576] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:03.368 [2024-07-25 09:58:42.406632] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:03.368 [2024-07-25 09:58:42.406635] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:03.368 [2024-07-25 09:58:42.406682] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:03.368 [2024-07-25 09:58:42.408320] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:03.368 [2024-07-25 09:58:42.408366] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:03.368 [2024-07-25 09:58:42.408763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
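Each of the four bdevperf jobs launched above (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) gets its bdev configuration on /dev/fd/63, i.e. from a process substitution fed by gen_nvmf_target_json; the resolved bdev_nvme_attach_controller parameters are the ones printed by the printf/jq trace. Below is a sketch of one such invocation, using a hypothetical temporary file in place of /dev/fd/63; the outer subsystems/bdev/config wrapper is an assumption about the helper's final output rather than something shown verbatim in this trace:

cat > /tmp/bdevperf_nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# Same flags as the -w write job traced at bdev_io_wait.sh@27 above.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json /tmp/bdevperf_nvme1.json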
00:10:03.368 [2024-07-25 09:58:42.408808] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:03.368 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.629 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.629 [2024-07-25 09:58:42.553915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.629 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.629 [2024-07-25 09:58:42.604490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:03.629 [2024-07-25 09:58:42.614387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.629 EAL: No free 2048 kB hugepages reported on node 1 00:10:03.629 [2024-07-25 09:58:42.664817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:03.629 [2024-07-25 09:58:42.673008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.629 [2024-07-25 09:58:42.713244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.629 [2024-07-25 09:58:42.724789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:10:03.629 [2024-07-25 09:58:42.762970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:03.890 Running I/O for 1 seconds... 00:10:03.890 Running I/O for 1 seconds... 00:10:03.890 Running I/O for 1 seconds... 00:10:04.151 Running I/O for 1 seconds... 00:10:05.094 00:10:05.094 Latency(us) 00:10:05.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.094 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:05.094 Nvme1n1 : 1.01 10117.77 39.52 0.00 0.00 12581.00 6417.07 22500.69 00:10:05.094 =================================================================================================================== 00:10:05.094 Total : 10117.77 39.52 0.00 0.00 12581.00 6417.07 22500.69 00:10:05.094 00:10:05.094 Latency(us) 00:10:05.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.094 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:05.094 Nvme1n1 : 1.01 14363.08 56.11 0.00 0.00 8880.45 4805.97 16493.23 00:10:05.094 =================================================================================================================== 00:10:05.094 Total : 14363.08 56.11 0.00 0.00 8880.45 4805.97 16493.23 00:10:05.094 00:10:05.094 Latency(us) 00:10:05.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.094 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:05.094 Nvme1n1 : 1.00 10465.34 40.88 0.00 0.00 12204.43 3577.17 27852.80 00:10:05.094 =================================================================================================================== 00:10:05.094 Total : 10465.34 40.88 0.00 0.00 12204.43 3577.17 27852.80 00:10:05.094 00:10:05.094 Latency(us) 00:10:05.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.094 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:05.094 Nvme1n1 : 1.00 187011.41 730.51 0.00 0.00 681.70 271.36 771.41 00:10:05.094 =================================================================================================================== 00:10:05.094 Total : 187011.41 730.51 0.00 0.00 681.70 271.36 771.41 00:10:05.094 09:58:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1143454 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1143456 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1143459 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.094 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.094 rmmod nvme_tcp 00:10:05.355 rmmod nvme_fabrics 00:10:05.355 rmmod nvme_keyring 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1143099 ']' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1143099 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1143099 ']' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1143099 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1143099 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1143099' 00:10:05.355 killing process with pid 1143099 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1143099 00:10:05.355 09:58:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1143099 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.355 09:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:07.904 00:10:07.904 real 0m12.378s 00:10:07.904 user 0m19.315s 00:10:07.904 sys 0m6.563s 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:07.904 ************************************ 00:10:07.904 END TEST nvmf_bdev_io_wait 00:10:07.904 ************************************ 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.904 ************************************ 00:10:07.904 START TEST nvmf_queue_depth 00:10:07.904 ************************************ 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:07.904 * Looking for test storage... 
00:10:07.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.904 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.905 09:58:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.905 09:58:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:10:16.051 09:58:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:16.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:16.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:16.051 09:58:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:16.051 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:16.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:16.052 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:16.052 
09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:16.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:16.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:10:16.052 00:10:16.052 --- 10.0.0.2 ping statistics --- 00:10:16.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.052 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:16.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:16.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.454 ms 00:10:16.052 00:10:16.052 --- 10.0.0.1 ping statistics --- 00:10:16.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:16.052 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:16.052 09:58:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1148088 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1148088 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1148088 ']' 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 [2024-07-25 09:58:54.060869] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:16.052 [2024-07-25 09:58:54.060935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.052 [2024-07-25 09:58:54.150264] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.052 [2024-07-25 09:58:54.245292] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:16.052 [2024-07-25 09:58:54.245357] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:16.052 [2024-07-25 09:58:54.245365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:16.052 [2024-07-25 09:58:54.245371] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:16.052 [2024-07-25 09:58:54.245378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:16.052 [2024-07-25 09:58:54.245406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 [2024-07-25 09:58:54.901272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.052 Malloc0 00:10:16.052 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.053 [2024-07-25 09:58:54.978486] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1148170 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1148170 /var/tmp/bdevperf.sock 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1148170 ']' 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:16.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:16.053 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.053 [2024-07-25 09:58:55.035149] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:16.053 [2024-07-25 09:58:55.035217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148170 ] 00:10:16.053 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.053 [2024-07-25 09:58:55.098689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.053 [2024-07-25 09:58:55.173308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:16.996 NVMe0n1 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.996 09:58:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:16.996 Running I/O for 10 seconds... 00:10:27.003 00:10:27.003 Latency(us) 00:10:27.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.003 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:27.003 Verification LBA range: start 0x0 length 0x4000 00:10:27.003 NVMe0n1 : 10.05 11738.50 45.85 0.00 0.00 86902.50 10267.31 72526.51 00:10:27.003 =================================================================================================================== 00:10:27.003 Total : 11738.50 45.85 0.00 0.00 86902.50 10267.31 72526.51 00:10:27.003 0 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1148170 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1148170 ']' 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1148170 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148170 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148170' 00:10:27.003 killing process with pid 1148170 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1148170 00:10:27.003 Received shutdown 
signal, test time was about 10.000000 seconds 00:10:27.003 00:10:27.003 Latency(us) 00:10:27.003 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:27.003 =================================================================================================================== 00:10:27.003 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:27.003 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1148170 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.264 rmmod nvme_tcp 00:10:27.264 rmmod nvme_fabrics 00:10:27.264 rmmod nvme_keyring 00:10:27.264 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1148088 ']' 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1148088 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1148088 ']' 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1148088 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1148088 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1148088' 00:10:27.265 killing process with pid 1148088 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1148088 00:10:27.265 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1148088 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.526 09:59:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.441 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:29.441 00:10:29.441 real 0m21.931s 00:10:29.441 user 0m25.400s 00:10:29.441 sys 0m6.571s 00:10:29.441 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:29.441 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:29.441 ************************************ 00:10:29.441 END TEST nvmf_queue_depth 00:10:29.441 ************************************ 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:29.705 ************************************ 00:10:29.705 START TEST nvmf_target_multipath 00:10:29.705 ************************************ 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:29.705 * Looking for test storage... 
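For reference before the multipath run gets under way: the nvmf_queue_depth test traced above boils down to a short RPC sequence against the target plus one bdevperf invocation, and this run sustained roughly 11.7k IOPS (about 45.9 MiB/s) of 4 KiB verify I/O at queue depth 1024. The sketch below condenses that flow using the same RPC names and arguments that appear in the trace; rpc_cmd in the harness is assumed to forward to scripts/rpc.py, and paths are abbreviated relative to the SPDK checkout.

    # Condensed from the nvmf_queue_depth trace above (same RPCs and arguments as this run).
    rpc=scripts/rpc.py   # assumption: rpc_cmd in the harness wraps this

    # 1. start the target inside the test namespace on core mask 0x2
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # 2. expose a 64 MiB malloc bdev over NVMe/TCP at 10.0.0.2:4420
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # 3. drive it from bdevperf: queue depth 1024, 4 KiB verify I/O, 10 seconds
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

After perform_tests returns, the trace above tears everything back down: it kills bdevperf and nvmf_tgt, unloads nvme-tcp/nvme-fabrics/nvme-keyring, removes the cvl_0_0_ns_spdk namespace, and flushes the initiator address, leaving the host clean for the next test.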
00:10:29.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:29.705 09:59:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
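The gather_supported_nvmf_pci_devs trace that follows repeats, for the multipath test, the same NIC discovery seen earlier: nvmf/common.sh keeps lists of supported Intel (E810: 0x1592/0x159b, X722: 0x37d2) and Mellanox device IDs, and for each matching PCI function it collects the network interfaces exposed under /sys/bus/pci/devices/<pci>/net whose link is up. A minimal sketch of that collection step is shown below; the PCI addresses are hard-coded to the two E810 ports this run found, whereas the real helper builds them from its pci_bus_cache and also handles RDMA-only and unbound devices.

    # Minimal sketch of the per-device collection traced below; assumes the two
    # E810 ports (0x8086:0x159b) that this run detected.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    net_devs=()

    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # interfaces bound to this function
        for net_dev in "${!pci_net_devs[@]}"; do
            dev=${pci_net_devs[net_dev]}
            # keep only interfaces whose link is up (the trace shows the 'up == up' check)
            [[ $(< "$dev/operstate") == up ]] || unset 'pci_net_devs[net_dev]'
        done
        ((${#pci_net_devs[@]} == 0)) && continue               # nothing usable on this function
        pci_net_devs=("${pci_net_devs[@]##*/}")                # strip the sysfs path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

As the rest of this section shows, the TCP bring-up leaves NVMF_SECOND_TARGET_IP empty, so multipath.sh's '[ -z ... ]' guard fires, it prints 'only one NIC for nvmf test', and the test exits 0 without running any multipath I/O.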
00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.888 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:37.889 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:37.889 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:37.889 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.889 09:59:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:37.889 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.889 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.750 ms 00:10:37.889 00:10:37.889 --- 10.0.0.2 ping statistics --- 00:10:37.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.889 rtt min/avg/max/mdev = 0.750/0.750/0.750/0.000 ms 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:10:37.889 00:10:37.889 --- 10.0.0.1 ping statistics --- 00:10:37.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.889 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:37.889 09:59:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:37.889 only one NIC for nvmf test 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:37.889 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:37.889 rmmod nvme_tcp 00:10:37.889 rmmod nvme_fabrics 00:10:37.889 rmmod nvme_keyring 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.890 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.278 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.278 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:39.278 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.279 00:10:39.279 real 0m9.541s 
00:10:39.279 user 0m1.953s 00:10:39.279 sys 0m5.494s 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:39.279 ************************************ 00:10:39.279 END TEST nvmf_target_multipath 00:10:39.279 ************************************ 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:39.279 ************************************ 00:10:39.279 START TEST nvmf_zcopy 00:10:39.279 ************************************ 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:39.279 * Looking for test storage... 00:10:39.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.279 09:59:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.279 09:59:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.279 09:59:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.429 09:59:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:47.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:47.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:47.429 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:47.429 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:47.429 09:59:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:47.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:47.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.617 ms 00:10:47.429 00:10:47.429 --- 10.0.0.2 ping statistics --- 00:10:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.429 rtt min/avg/max/mdev = 0.617/0.617/0.617/0.000 ms 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:47.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:47.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:10:47.429 00:10:47.429 --- 10.0.0.1 ping statistics --- 00:10:47.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:47.429 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:47.429 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1158829 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1158829 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1158829 ']' 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:47.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:47.430 09:59:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 [2024-07-25 09:59:25.630465] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:47.430 [2024-07-25 09:59:25.630533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.430 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.430 [2024-07-25 09:59:25.695762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.430 [2024-07-25 09:59:25.778452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.430 [2024-07-25 09:59:25.778510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.430 [2024-07-25 09:59:25.778516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.430 [2024-07-25 09:59:25.778521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.430 [2024-07-25 09:59:25.778525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.430 [2024-07-25 09:59:25.778547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 [2024-07-25 09:59:26.489144] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 [2024-07-25 09:59:26.513349] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.430 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.692 malloc0 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:47.692 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:47.693 { 00:10:47.693 "params": { 00:10:47.693 "name": "Nvme$subsystem", 00:10:47.693 "trtype": "$TEST_TRANSPORT", 00:10:47.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:47.693 "adrfam": "ipv4", 00:10:47.693 "trsvcid": "$NVMF_PORT", 00:10:47.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:47.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:47.693 "hdgst": ${hdgst:-false}, 00:10:47.693 "ddgst": ${ddgst:-false} 00:10:47.693 }, 00:10:47.693 "method": "bdev_nvme_attach_controller" 00:10:47.693 } 00:10:47.693 EOF 00:10:47.693 )") 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:47.693 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:47.693 "params": { 00:10:47.693 "name": "Nvme1", 00:10:47.693 "trtype": "tcp", 00:10:47.693 "traddr": "10.0.0.2", 00:10:47.693 "adrfam": "ipv4", 00:10:47.693 "trsvcid": "4420", 00:10:47.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:47.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:47.693 "hdgst": false, 00:10:47.693 "ddgst": false 00:10:47.693 }, 00:10:47.693 "method": "bdev_nvme_attach_controller" 00:10:47.693 }' 00:10:47.693 [2024-07-25 09:59:26.612763] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:47.693 [2024-07-25 09:59:26.612813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159032 ] 00:10:47.693 EAL: No free 2048 kB hugepages reported on node 1 00:10:47.693 [2024-07-25 09:59:26.670013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:47.693 [2024-07-25 09:59:26.734730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:47.953 Running I/O for 10 seconds... 00:10:57.995 00:10:57.995 Latency(us) 00:10:57.995 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.995 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:57.995 Verification LBA range: start 0x0 length 0x1000 00:10:57.995 Nvme1n1 : 10.01 9093.84 71.05 0.00 0.00 14023.71 1665.71 38884.69 00:10:57.995 =================================================================================================================== 00:10:57.995 Total : 9093.84 71.05 0.00 0.00 14023.71 1665.71 38884.69 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1161195 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:57.995 { 00:10:57.995 "params": { 00:10:57.995 "name": "Nvme$subsystem", 00:10:57.995 "trtype": "$TEST_TRANSPORT", 00:10:57.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:57.995 "adrfam": "ipv4", 00:10:57.995 "trsvcid": "$NVMF_PORT", 00:10:57.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:57.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:57.995 "hdgst": ${hdgst:-false}, 00:10:57.995 "ddgst": ${ddgst:-false} 00:10:57.995 }, 00:10:57.995 "method": "bdev_nvme_attach_controller" 00:10:57.995 } 00:10:57.995 EOF 00:10:57.995 )") 00:10:57.995 09:59:37 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:57.995 [2024-07-25 09:59:37.058252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.058283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:57.995 09:59:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:57.995 "params": { 00:10:57.995 "name": "Nvme1", 00:10:57.995 "trtype": "tcp", 00:10:57.995 "traddr": "10.0.0.2", 00:10:57.995 "adrfam": "ipv4", 00:10:57.995 "trsvcid": "4420", 00:10:57.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:57.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:57.995 "hdgst": false, 00:10:57.995 "ddgst": false 00:10:57.995 }, 00:10:57.995 "method": "bdev_nvme_attach_controller" 00:10:57.995 }' 00:10:57.995 [2024-07-25 09:59:37.070251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.070259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 [2024-07-25 09:59:37.082275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.082281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 [2024-07-25 09:59:37.094305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.094313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 [2024-07-25 09:59:37.095712] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:10:57.995 [2024-07-25 09:59:37.095757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161195 ] 00:10:57.995 [2024-07-25 09:59:37.106336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.106343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 [2024-07-25 09:59:37.118367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:57.995 [2024-07-25 09:59:37.118374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:57.995 EAL: No free 2048 kB hugepages reported on node 1 00:10:58.256 [2024-07-25 09:59:37.130399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.130407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.142430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.142437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.154461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.154468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.154477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.256 [2024-07-25 09:59:37.166491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.166498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.178523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.178531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.190553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.190564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.202584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.202592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.214614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.214622] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.218305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.256 [2024-07-25 09:59:37.226646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.226654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.238684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.238696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.250710] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.250720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.262740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.262747] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.274771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.274779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.286801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.286808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.298845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.298861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.310865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.310873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.322896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.322906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.334927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.334935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.346957] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.346965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.358988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.358995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.371021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.371030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.256 [2024-07-25 09:59:37.383053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.256 [2024-07-25 09:59:37.383061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.517 [2024-07-25 09:59:37.395096] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.517 [2024-07-25 09:59:37.395109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.517 Running I/O for 5 seconds... 
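For readability, the setup that the xtrace above performs can be condensed into the shell sketch below. Every command and value is taken from this trace (the ice ports cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, the 10.0.0.x addresses, NQN nqn.2016-06.io.spdk:cnode1, and the bdevperf flags); rpc_cmd is the test suite's RPC helper, and the real zcopy.sh may order or wrap these steps differently, so treat this as a sketch of this particular run rather than the script itself.

# Target side runs inside a network namespace holding one ice port; the initiator
# keeps the second port in the default namespace (as traced above).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP in

# NVMe-oF target with zero-copy enabled on the TCP transport (-o -c 0 --zcopy),
# exporting one malloc bdev as namespace 1 of cnode1 on 10.0.0.2:4420.
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator-side load generators seen in the trace: the 10 s verify run whose
# results are printed above, then the 5 s randrw run that has just started.
# The repeated "Requested NSID 1 already in use" / "Unable to add namespace"
# messages that follow appear to come from re-issuing nvmf_subsystem_add_ns for
# NSID 1 while that run is in flight, which is expected to fail.
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192
build/examples/bdevperf --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192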
00:10:58.517 [2024-07-25 09:59:37.407117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.517 [2024-07-25 09:59:37.407124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.517 [2024-07-25 09:59:37.427293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.517 [2024-07-25 09:59:37.427315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.517 [2024-07-25 09:59:37.438840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.517 [2024-07-25 09:59:37.438855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.452369] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.452385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.465602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.465617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.478284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.478298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.491646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.491661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.504898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.504913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.518452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.518467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.531691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.531706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.544744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.544758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.557930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.557945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.571678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.571693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.584388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.584402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.597408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 
[2024-07-25 09:59:37.597423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.610626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.610641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.624062] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.624078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.636980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.636995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.518 [2024-07-25 09:59:37.650613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.518 [2024-07-25 09:59:37.650627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.663478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.663492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.676775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.676790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.690553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.690568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.703081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.703095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.716090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.716105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.728526] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.728541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.741016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.741031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.753522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.753537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.766697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.766712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.779 [2024-07-25 09:59:37.779870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.779 [2024-07-25 09:59:37.779885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.793328] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.793343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.806542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.806557] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.819677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.819692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.832925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.832939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.846356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.846371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.859836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.859850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.873226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.873240] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.886923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.886938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.900378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.900393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:58.780 [2024-07-25 09:59:37.913081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:58.780 [2024-07-25 09:59:37.913096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.925536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.925550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.938895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.938910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.952216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.952231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.965067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.965082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.978732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.978747] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:37.991334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:37.991349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.004190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.004210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.017507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.017522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.031101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.031116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.044122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.044137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.057566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.057581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.071039] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.071054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.084192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.084211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.097562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.097576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.110491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.110505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.123079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.123094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.136409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.136424] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.149740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.149756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.162264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.162279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.041 [2024-07-25 09:59:38.174282] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.041 [2024-07-25 09:59:38.174297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.303 [2024-07-25 09:59:38.187548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.303 [2024-07-25 09:59:38.187565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.303 [2024-07-25 09:59:38.200079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.303 [2024-07-25 09:59:38.200094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.303 [2024-07-25 09:59:38.212857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.212872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.226303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.226318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.239381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.239396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.252259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.252274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.265565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.265579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.278671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.278686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.292130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.292145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.305489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.305504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.318884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.318899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.332067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.332082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.345439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.345454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.358530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.358545] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.371712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.371727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.385018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.385033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.397498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.397513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.410364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.410382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.423262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.423277] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.304 [2024-07-25 09:59:38.435613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.304 [2024-07-25 09:59:38.435627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.448721] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.448736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.461895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.461909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.475184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.475199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.488769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.488784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.502147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.502163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.515582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.515596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.528815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.528830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.541979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.541994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.555394] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.555409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.567695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.567709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.580856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.580871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.594185] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.594204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.607496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.607511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.620641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.620656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.633853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.633869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.647113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.647128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.660302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.660321] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.673828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.673843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.565 [2024-07-25 09:59:38.687146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.565 [2024-07-25 09:59:38.687160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.700543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.700558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.714183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.714197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.727902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.727917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.740789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.740804] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.753521] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.753535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.766675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.766689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.779395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.779409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.826 [2024-07-25 09:59:38.792820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.826 [2024-07-25 09:59:38.792835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.806194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.806212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.819155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.819170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.832652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.832667] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.846242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.846257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.859828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.859842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.872933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.872948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.885679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.885693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.899275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.899289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.912687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.912706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.925424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.925439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.938808] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.938823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:59.827 [2024-07-25 09:59:38.951438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:59.827 [2024-07-25 09:59:38.951452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:38.964578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:38.964594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:38.977684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:38.977700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:38.990788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:38.990802] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.004143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.004158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.017209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.017224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.029745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.029759] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.042276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.042291] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.055327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.055341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.068779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.068793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.081567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.081581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.094520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.094535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.107242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.107257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.121038] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.121053] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.133831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.133846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.146735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.146750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.159319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.159337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.172544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.172558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.185739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.185754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.199030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.199045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.088 [2024-07-25 09:59:39.212471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.088 [2024-07-25 09:59:39.212485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.225148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.225163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.238239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.238254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.251318] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.251333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.264351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.264366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.277470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.277485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.290392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.290407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.303752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.303767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.317180] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.317195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.329779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.329793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.342991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.343006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.356333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.356348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.369646] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.369661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.383053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.383068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.396636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.396651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.410083] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.410101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.423252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.423267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.436128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.436142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.449253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.449268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.462247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.462262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.350 [2024-07-25 09:59:39.475480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.350 [2024-07-25 09:59:39.475494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.488394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.488409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.501629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.501644] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.514381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.514395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.527375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.527389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.540664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.540678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.553000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.553014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.565807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.565822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.579044] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.579059] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.592247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.592261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.605331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.605346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.618617] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.618632] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.631600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.631615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.644717] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.644731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.658034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.658048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.670774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.670788] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.683393] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.683407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.695900] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.695915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.709433] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.709448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.722235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.722250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.611 [2024-07-25 09:59:39.735166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.611 [2024-07-25 09:59:39.735181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.748429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.748444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.761246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.761260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.773191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.773211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.786626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.786641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.799528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.799543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.813030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.813045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.825935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.825950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.838682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.838697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.851868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.851884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.864439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.864455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.877388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.877403] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.890702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.890717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.904113] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.904128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.917139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.917154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.929501] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.929516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.942876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.942890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.956152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.956167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.969520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.969535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.982449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.982463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:00.873 [2024-07-25 09:59:39.995394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:00.873 [2024-07-25 09:59:39.995408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.008508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.008525] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.021545] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.021561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.034839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.034855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.047951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.047967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.060775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.060790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.073832] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.073847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.086735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.134 [2024-07-25 09:59:40.086750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.134 [2024-07-25 09:59:40.100063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.100079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.113134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.113153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.126025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.126042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.139010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.139025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.151676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.151691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.165155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.165171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.178502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.178518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.190916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.190932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.203578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.203592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.216535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.216549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.229796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.229811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.243239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.243255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.135 [2024-07-25 09:59:40.256722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.135 [2024-07-25 09:59:40.256737] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.270082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.270097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.283173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.283188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.296082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.296097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.309164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.309178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.321478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.321493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.334867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.334883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.348368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.348383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.361505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.361520] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.374581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.374596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.387430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.387451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.400080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.400094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.412493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.412508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.425381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.425395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.438131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.438145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.450642] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.450656] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.463742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.463757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.476299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.476314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.489308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.489322] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.502614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.502629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.515789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.515803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.396 [2024-07-25 09:59:40.528618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.396 [2024-07-25 09:59:40.528633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.541192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.541212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.554477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.554492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.567489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.567504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.580789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.580804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.593447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.593461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.606539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.606554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.619632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.619647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.632785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.632803] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.646090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.646105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.659364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.659379] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.672172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.672186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.684860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.684875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.698374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.698389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.711328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.711342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.724491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.724506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.737559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.737574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.750504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.750519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.763408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.763422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.776019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.776033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.657 [2024-07-25 09:59:40.788510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.657 [2024-07-25 09:59:40.788524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.800926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.800941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.814158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.814173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.827274] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.827289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.840434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.840448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.852937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.852951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.865859] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.865873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.878427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.878445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.892106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.892121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.905198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.905218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.918157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.918172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.931331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.931346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.918 [2024-07-25 09:59:40.944674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.918 [2024-07-25 09:59:40.944689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:40.957469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:40.957485] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:40.970193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:40.970212] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:40.983246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:40.983261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:40.995950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:40.995965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:41.009645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:41.009660] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:41.022759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:41.022773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:41.036149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:41.036164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:01.919 [2024-07-25 09:59:41.049141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:01.919 [2024-07-25 09:59:41.049156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.062024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.062039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.075370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.075384] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.089114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.089129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.102291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.102306] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.115480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.115495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.180 [2024-07-25 09:59:41.128581] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.180 [2024-07-25 09:59:41.128602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.141751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.141766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.154530] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.154545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.167730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.167745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.181132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.181146] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.193671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.193689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.206784] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.206799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.220088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.220102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.233167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.233181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.246311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.246326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.259368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.259382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.272920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.272935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.285852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.285867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.298818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.298832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.181 [2024-07-25 09:59:41.311446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.181 [2024-07-25 09:59:41.311461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.323826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.323841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.336895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.336910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.349786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.349801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.362954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.362968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.376311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.376329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.389795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.389810] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.402434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.402449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.415385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.415400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.428862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.428877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.441942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.441958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.455176] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.455191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.468331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.468346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.481327] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.481341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.494455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.494470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.507687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.507702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.520588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.520603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.533450] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.533465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.545895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.545910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.559125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.559140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.572553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.572569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.585618] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.585633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.478 [2024-07-25 09:59:41.598535] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.478 [2024-07-25 09:59:41.598550] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.611486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.611501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.624663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.624678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.637872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.637888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.651080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.651095] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.663662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.663678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.676637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.676653] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.689600] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.689615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.702865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.702880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.715954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.715969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.728345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.728359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.741408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.741423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.754441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.754456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.767576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.767591] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.780891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.780906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.794075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.794090] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.807017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.807032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.820011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.820027] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.833080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.833096] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.846189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.846208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.858670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.858685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:02.740 [2024-07-25 09:59:41.871609] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:02.740 [2024-07-25 09:59:41.871624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.884650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.884665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.897023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.897037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.909858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.909872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.922320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.922335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.935525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.935539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.948654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.948669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.961228] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.961243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.973819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.973834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.986923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.986939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:41.999436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:41.999451] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.012744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.012758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.025231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.025246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.037647] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.037662] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.050593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.050607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.064172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.064186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.077477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.077491] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.090860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.090875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.103974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.103989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.117168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.117183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.002 [2024-07-25 09:59:42.130337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.002 [2024-07-25 09:59:42.130352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.143497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.143512] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.156621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.156635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.169508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.169522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.182095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.182110] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.195264] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.195279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.208355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.208370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.221517] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.221531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.234123] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.234137] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.246853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.246868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.260188] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.260207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.273564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.273579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.286272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.286287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.299412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.299426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.312727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.312741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.325665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.325679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.338645] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.338660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.351919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.351938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.364837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.364852] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.377792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.377807] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.264 [2024-07-25 09:59:42.390379] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.264 [2024-07-25 09:59:42.390393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.403422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.403438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.416087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.416102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 00:11:03.526 Latency(us) 00:11:03.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.526 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:11:03.526 Nvme1n1 : 5.00 19495.72 152.31 0.00 0.00 6558.99 2430.29 27088.21 00:11:03.526 =================================================================================================================== 00:11:03.526 Total : 19495.72 152.31 0.00 0.00 6558.99 2430.29 27088.21 00:11:03.526 [2024-07-25 09:59:42.425427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.425440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.437456] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.437468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.449492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.449503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.461520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.461532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.473550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.473560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.485577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.485586] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.497606] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.497614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.509636] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.509643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.521668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.521676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.533697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.533706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.545726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.545739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 [2024-07-25 09:59:42.557754] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:03.526 [2024-07-25 09:59:42.557761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:03.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1161195) - No such process 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1161195 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:03.526 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.527 delay0 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.527 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:03.527 EAL: No free 2048 kB hugepages reported on 
node 1 00:11:03.527 [2024-07-25 09:59:42.659475] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:10.113 Initializing NVMe Controllers 00:11:10.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:10.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:10.113 Initialization complete. Launching workers. 00:11:10.113 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 79 00:11:10.113 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 363, failed to submit 36 00:11:10.113 success 171, unsuccess 192, failed 0 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:10.114 rmmod nvme_tcp 00:11:10.114 rmmod nvme_fabrics 00:11:10.114 rmmod nvme_keyring 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1158829 ']' 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1158829 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1158829 ']' 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1158829 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.114 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1158829 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1158829' 00:11:10.114 killing process with pid 1158829 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1158829 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1158829 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:10.114 09:59:49 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.114 09:59:49 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.659 00:11:12.659 real 0m32.919s 00:11:12.659 user 0m44.383s 00:11:12.659 sys 0m10.110s 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:12.659 ************************************ 00:11:12.659 END TEST nvmf_zcopy 00:11:12.659 ************************************ 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:12.659 ************************************ 00:11:12.659 START TEST nvmf_nmic 00:11:12.659 ************************************ 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:12.659 * Looking for test storage... 
00:11:12.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:12.659 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.660 09:59:51 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.660 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:11:19.250 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:19.251 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:19.251 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.251 09:59:58 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:19.251 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:19.251 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:19.251 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:19.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.623 ms 00:11:19.513 00:11:19.513 --- 10.0.0.2 ping statistics --- 00:11:19.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.513 rtt min/avg/max/mdev = 0.623/0.623/0.623/0.000 ms 00:11:19.513 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:19.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:19.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:11:19.774 00:11:19.774 --- 10.0.0.1 ping statistics --- 00:11:19.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.774 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1167655 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1167655 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1167655 ']' 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.774 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:19.774 [2024-07-25 09:59:58.751285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:19.774 [2024-07-25 09:59:58.751351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.774 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.774 [2024-07-25 09:59:58.822008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.774 [2024-07-25 09:59:58.898260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.774 [2024-07-25 09:59:58.898300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.774 [2024-07-25 09:59:58.898307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.774 [2024-07-25 09:59:58.898314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.774 [2024-07-25 09:59:58.898319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
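For reference, the target start-up traced above comes down to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waiting for its RPC socket to answer; a minimal hand-run sketch (not the test harness itself, which goes through the nvmfappstart/waitforlisten helpers, and assuming the SPDK build tree from the log) would be:

  # start the NVMe-oF target in the namespace created earlier, core mask 0xF
  sudo ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # poll the default RPC socket until the target is ready to accept rpc.py calls
  until sudo ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 1
  done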
00:11:19.774 [2024-07-25 09:59:58.898499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.774 [2024-07-25 09:59:58.898614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.775 [2024-07-25 09:59:58.898770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.775 [2024-07-25 09:59:58.898771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 [2024-07-25 09:59:59.589228] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 Malloc0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 [2024-07-25 09:59:59.648644] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:20.718 test case1: single bdev can't be used in multiple subsystems 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 [2024-07-25 09:59:59.684598] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:20.718 [2024-07-25 09:59:59.684617] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:20.718 [2024-07-25 09:59:59.684624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.718 request: 00:11:20.718 { 00:11:20.718 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:20.718 "namespace": { 00:11:20.718 "bdev_name": "Malloc0", 00:11:20.718 "no_auto_visible": false 00:11:20.718 }, 00:11:20.718 "method": "nvmf_subsystem_add_ns", 00:11:20.718 "req_id": 1 00:11:20.718 } 00:11:20.718 Got JSON-RPC error response 00:11:20.718 response: 00:11:20.718 { 00:11:20.718 "code": -32602, 00:11:20.718 "message": "Invalid parameters" 00:11:20.718 } 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:20.718 Adding namespace failed - expected result. 
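Test case1 above deliberately adds the same malloc bdev to a second subsystem and expects the -32602 "Invalid parameters" response, because the first nvmf_subsystem_add_ns already claimed Malloc0 exclusive_write. A rough rpc.py equivalent of that sequence (the test itself goes through the rpc_cmd wrapper; NQNs, serial numbers and the bdev name are taken from the log):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  # second claim on the same bdev is rejected ("bdev Malloc0 cannot be opened, error=-1")
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0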
00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:20.718 test case2: host connect to nvmf target in multiple paths 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:20.718 [2024-07-25 09:59:59.696730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.718 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.634 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:24.015 10:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:24.015 10:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.015 10:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:24.015 10:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:24.015 10:00:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:11:25.924 10:00:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:25.924 [global] 00:11:25.924 thread=1 00:11:25.924 invalidate=1 00:11:25.924 rw=write 00:11:25.924 time_based=1 00:11:25.924 runtime=1 00:11:25.924 ioengine=libaio 00:11:25.924 direct=1 00:11:25.924 bs=4096 00:11:25.924 iodepth=1 00:11:25.924 norandommap=0 00:11:25.924 numjobs=1 00:11:25.924 00:11:25.924 verify_dump=1 00:11:25.924 verify_backlog=512 00:11:25.924 verify_state_save=0 00:11:25.924 do_verify=1 00:11:25.924 verify=crc32c-intel 00:11:25.924 [job0] 00:11:25.924 filename=/dev/nvme0n1 00:11:25.924 Could not set queue depth (nvme0n1) 00:11:26.184 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:11:26.184 fio-3.35 00:11:26.184 Starting 1 thread 00:11:27.668 00:11:27.668 job0: (groupid=0, jobs=1): err= 0: pid=1169207: Thu Jul 25 10:00:06 2024 00:11:27.668 read: IOPS=11, BW=47.8KiB/s (49.0kB/s)(48.0KiB/1004msec) 00:11:27.668 slat (nsec): min=24125, max=25719, avg=24901.17, stdev=408.57 00:11:27.668 clat (usec): min=41846, max=42062, avg=41952.85, stdev=72.61 00:11:27.668 lat (usec): min=41870, max=42087, avg=41977.75, stdev=72.55 00:11:27.668 clat percentiles (usec): 00:11:27.668 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:27.668 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:11:27.668 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:27.668 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:27.668 | 99.99th=[42206] 00:11:27.668 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:11:27.668 slat (usec): min=10, max=27271, avg=86.53, stdev=1203.79 00:11:27.668 clat (usec): min=651, max=1031, avg=881.81, stdev=65.46 00:11:27.668 lat (usec): min=682, max=28227, avg=968.34, stdev=1208.83 00:11:27.668 clat percentiles (usec): 00:11:27.668 | 1.00th=[ 685], 5.00th=[ 766], 10.00th=[ 791], 20.00th=[ 832], 00:11:27.668 | 30.00th=[ 857], 40.00th=[ 889], 50.00th=[ 898], 60.00th=[ 906], 00:11:27.668 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 955], 95.00th=[ 979], 00:11:27.668 | 99.00th=[ 1020], 99.50th=[ 1029], 99.90th=[ 1029], 99.95th=[ 1029], 00:11:27.668 | 99.99th=[ 1029] 00:11:27.668 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:27.668 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:27.668 lat (usec) : 750=3.44%, 1000=92.75% 00:11:27.668 lat (msec) : 2=1.53%, 50=2.29% 00:11:27.668 cpu : usr=1.10%, sys=1.40%, ctx=527, majf=0, minf=1 00:11:27.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:27.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:27.668 issued rwts: total=12,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:27.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:27.668 00:11:27.668 Run status group 0 (all jobs): 00:11:27.668 READ: bw=47.8KiB/s (49.0kB/s), 47.8KiB/s-47.8KiB/s (49.0kB/s-49.0kB/s), io=48.0KiB (49.2kB), run=1004-1004msec 00:11:27.668 WRITE: bw=2040KiB/s (2089kB/s), 2040KiB/s-2040KiB/s (2089kB/s-2089kB/s), io=2048KiB (2097kB), run=1004-1004msec 00:11:27.668 00:11:27.668 Disk stats (read/write): 00:11:27.668 nvme0n1: ios=34/512, merge=0/0, ticks=1346/396, in_queue=1742, util=99.00% 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 
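The short write/verify pass above is driven by the fio-wrapper script over the two connected paths; reproduced by hand it amounts roughly to the following (a sketch, not the wrapper itself; target address, host NQN/ID and the job options are copied from the log):

  # connect both paths announced by the target (4420 and 4421)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect -t tcp -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  # one-second time-based write job with crc32c verification, as in the job file printed above
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based --runtime=1 --verify=crc32c-intel
  # tears down both controllers for the subsystem
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1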
00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.668 rmmod nvme_tcp 00:11:27.668 rmmod nvme_fabrics 00:11:27.668 rmmod nvme_keyring 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1167655 ']' 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1167655 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1167655 ']' 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1167655 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1167655 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1167655' 00:11:27.668 killing process with pid 1167655 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1167655 00:11:27.668 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1167655 00:11:27.937 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.937 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.937 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.938 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.938 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.938 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.938 10:00:06 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.938 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.851 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.851 00:11:29.851 real 0m17.701s 00:11:29.851 user 0m49.211s 00:11:29.851 sys 0m6.206s 00:11:29.851 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:29.851 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:29.851 ************************************ 00:11:29.851 END TEST nvmf_nmic 00:11:29.851 ************************************ 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:30.113 ************************************ 00:11:30.113 START TEST nvmf_fio_target 00:11:30.113 ************************************ 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:30.113 * Looking for test storage... 00:11:30.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.113 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.114 10:00:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.267 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:38.268 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:38.268 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:38.268 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:38.268 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.268 10:00:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.268 10:00:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.879 ms 00:11:38.268 00:11:38.268 --- 10.0.0.2 ping statistics --- 00:11:38.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.268 rtt min/avg/max/mdev = 0.879/0.879/0.879/0.000 ms 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:38.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:11:38.268 00:11:38.268 --- 10.0.0.1 ping statistics --- 00:11:38.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.268 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1174319 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1174319 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1174319 ']' 00:11:38.268 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.268 10:00:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 [2024-07-25 10:00:16.414698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:38.269 [2024-07-25 10:00:16.414772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.269 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.269 [2024-07-25 10:00:16.487012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.269 [2024-07-25 10:00:16.551940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.269 [2024-07-25 10:00:16.551980] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.269 [2024-07-25 10:00:16.551988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.269 [2024-07-25 10:00:16.551994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.269 [2024-07-25 10:00:16.552000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.269 [2024-07-25 10:00:16.553218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.269 [2024-07-25 10:00:16.553252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.269 [2024-07-25 10:00:16.553414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.269 [2024-07-25 10:00:16.553504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.269 [2024-07-25 10:00:16.832850] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.269 10:00:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.269 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:38.269 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.269 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:38.269 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.530 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:38.530 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:38.530 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:38.530 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:38.791 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.051 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:39.051 10:00:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.051 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:39.051 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:39.310 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:39.310 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:11:39.568 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:39.568 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.568 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:39.828 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:39.828 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:40.087 10:00:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.087 [2024-07-25 10:00:19.119190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.087 10:00:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:40.349 10:00:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:40.608 10:00:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:41.998 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:41.998 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:11:41.998 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:41.999 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:11:41.999 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:11:41.999 10:00:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:11:43.913 10:00:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:43.913 10:00:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:43.913 10:00:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:43.913 10:00:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:11:43.913 10:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:43.913 10:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:11:43.913 10:00:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:43.913 [global] 00:11:43.913 thread=1 00:11:43.913 invalidate=1 00:11:43.913 rw=write 00:11:43.913 time_based=1 00:11:43.913 runtime=1 00:11:43.913 ioengine=libaio 00:11:43.913 direct=1 00:11:43.913 bs=4096 00:11:43.913 iodepth=1 00:11:43.913 norandommap=0 00:11:43.913 numjobs=1 00:11:43.913 00:11:43.913 verify_dump=1 00:11:43.913 verify_backlog=512 00:11:43.913 verify_state_save=0 00:11:43.913 do_verify=1 00:11:43.913 verify=crc32c-intel 00:11:44.192 [job0] 00:11:44.192 filename=/dev/nvme0n1 00:11:44.192 [job1] 00:11:44.192 filename=/dev/nvme0n2 00:11:44.192 [job2] 00:11:44.192 filename=/dev/nvme0n3 00:11:44.192 [job3] 00:11:44.192 filename=/dev/nvme0n4 00:11:44.192 Could not set queue depth (nvme0n1) 00:11:44.192 Could not set queue depth (nvme0n2) 00:11:44.192 Could not set queue depth (nvme0n3) 00:11:44.192 Could not set queue depth (nvme0n4) 00:11:44.457 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.457 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.457 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.457 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:44.457 fio-3.35 00:11:44.457 Starting 4 threads 00:11:45.875 00:11:45.875 job0: (groupid=0, jobs=1): err= 0: pid=1175912: Thu Jul 25 10:00:24 2024 00:11:45.875 read: IOPS=16, BW=66.6KiB/s (68.2kB/s)(68.0KiB/1021msec) 00:11:45.875 slat (nsec): min=23920, max=24366, avg=24063.65, stdev=138.65 00:11:45.875 clat (usec): min=1309, max=42982, avg=39989.76, stdev=9976.92 00:11:45.875 lat (usec): min=1333, max=43006, avg=40013.82, stdev=9976.95 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[ 1303], 5.00th=[ 1303], 10.00th=[41681], 20.00th=[42206], 00:11:45.875 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:11:45.875 | 70.00th=[42730], 80.00th=[42730], 90.00th=[42730], 95.00th=[42730], 00:11:45.875 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:45.875 | 99.99th=[42730] 00:11:45.875 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:11:45.875 slat (nsec): min=8640, max=69193, avg=22429.57, stdev=11144.18 00:11:45.875 clat (usec): min=224, max=1249, avg=638.47, stdev=255.45 00:11:45.875 lat (usec): min=236, max=1280, avg=660.90, stdev=262.34 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[ 277], 5.00th=[ 314], 10.00th=[ 355], 20.00th=[ 416], 00:11:45.875 | 30.00th=[ 445], 40.00th=[ 494], 50.00th=[ 537], 60.00th=[ 594], 00:11:45.875 | 70.00th=[ 881], 80.00th=[ 938], 90.00th=[ 996], 95.00th=[ 1037], 00:11:45.875 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1254], 99.95th=[ 1254], 00:11:45.875 | 99.99th=[ 1254] 00:11:45.875 bw ( KiB/s): min= 4087, max= 4087, per=50.94%, avg=4087.00, stdev= 0.00, samples=1 00:11:45.875 iops : min= 1021, max= 
1021, avg=1021.00, stdev= 0.00, samples=1 00:11:45.875 lat (usec) : 250=0.19%, 500=40.26%, 750=21.55%, 1000=25.33% 00:11:45.875 lat (msec) : 2=9.64%, 50=3.02% 00:11:45.875 cpu : usr=0.20%, sys=1.76%, ctx=530, majf=0, minf=1 00:11:45.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.875 job1: (groupid=0, jobs=1): err= 0: pid=1175913: Thu Jul 25 10:00:24 2024 00:11:45.875 read: IOPS=12, BW=51.7KiB/s (53.0kB/s)(52.0KiB/1005msec) 00:11:45.875 slat (nsec): min=25056, max=27201, avg=25725.85, stdev=516.75 00:11:45.875 clat (usec): min=41762, max=43105, avg=42231.51, stdev=500.78 00:11:45.875 lat (usec): min=41787, max=43132, avg=42257.23, stdev=501.01 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:11:45.875 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:45.875 | 70.00th=[42206], 80.00th=[43254], 90.00th=[43254], 95.00th=[43254], 00:11:45.875 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:45.875 | 99.99th=[43254] 00:11:45.875 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:45.875 slat (nsec): min=9225, max=50578, avg=31172.16, stdev=6638.74 00:11:45.875 clat (usec): min=534, max=1118, avg=852.01, stdev=134.28 00:11:45.875 lat (usec): min=547, max=1151, avg=883.18, stdev=136.51 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[ 553], 5.00th=[ 619], 10.00th=[ 644], 20.00th=[ 717], 00:11:45.875 | 30.00th=[ 766], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[ 922], 00:11:45.875 | 70.00th=[ 947], 80.00th=[ 971], 90.00th=[ 1004], 95.00th=[ 1029], 00:11:45.875 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:11:45.875 | 99.99th=[ 1123] 00:11:45.875 bw ( KiB/s): min= 4096, max= 4096, per=51.05%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.875 lat (usec) : 750=26.48%, 1000=60.19% 00:11:45.875 lat (msec) : 2=10.86%, 50=2.48% 00:11:45.875 cpu : usr=1.00%, sys=2.09%, ctx=525, majf=0, minf=1 00:11:45.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 issued rwts: total=13,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.875 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.875 job2: (groupid=0, jobs=1): err= 0: pid=1175914: Thu Jul 25 10:00:24 2024 00:11:45.875 read: IOPS=420, BW=1682KiB/s (1723kB/s)(1684KiB/1001msec) 00:11:45.875 slat (nsec): min=6887, max=60998, avg=26519.88, stdev=4704.10 00:11:45.875 clat (usec): min=679, max=1439, avg=1233.89, stdev=107.99 00:11:45.875 lat (usec): min=692, max=1466, avg=1260.41, stdev=109.94 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[ 816], 5.00th=[ 1020], 10.00th=[ 1106], 20.00th=[ 1172], 00:11:45.875 | 30.00th=[ 1221], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:11:45.875 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1352], 00:11:45.875 | 99.00th=[ 1401], 99.50th=[ 1418], 99.90th=[ 
1434], 99.95th=[ 1434], 00:11:45.875 | 99.99th=[ 1434] 00:11:45.875 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:45.875 slat (nsec): min=9133, max=71182, avg=33428.44, stdev=7263.52 00:11:45.875 clat (usec): min=222, max=1182, avg=868.52, stdev=178.60 00:11:45.875 lat (usec): min=233, max=1217, avg=901.95, stdev=182.79 00:11:45.875 clat percentiles (usec): 00:11:45.875 | 1.00th=[ 302], 5.00th=[ 400], 10.00th=[ 562], 20.00th=[ 816], 00:11:45.875 | 30.00th=[ 857], 40.00th=[ 889], 50.00th=[ 914], 60.00th=[ 947], 00:11:45.875 | 70.00th=[ 963], 80.00th=[ 988], 90.00th=[ 1020], 95.00th=[ 1057], 00:11:45.875 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1188], 99.95th=[ 1188], 00:11:45.875 | 99.99th=[ 1188] 00:11:45.875 bw ( KiB/s): min= 4096, max= 4096, per=51.05%, avg=4096.00, stdev= 0.00, samples=1 00:11:45.875 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:45.875 lat (usec) : 250=0.21%, 500=3.54%, 750=4.39%, 1000=40.41% 00:11:45.875 lat (msec) : 2=51.45% 00:11:45.875 cpu : usr=2.10%, sys=3.70%, ctx=934, majf=0, minf=1 00:11:45.875 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.875 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.875 issued rwts: total=421,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.876 job3: (groupid=0, jobs=1): err= 0: pid=1175915: Thu Jul 25 10:00:24 2024 00:11:45.876 read: IOPS=396, BW=1586KiB/s (1624kB/s)(1588KiB/1001msec) 00:11:45.876 slat (nsec): min=25010, max=47089, avg=26053.85, stdev=3094.81 00:11:45.876 clat (usec): min=1074, max=1460, avg=1300.40, stdev=58.19 00:11:45.876 lat (usec): min=1100, max=1486, avg=1326.45, stdev=58.09 00:11:45.876 clat percentiles (usec): 00:11:45.876 | 1.00th=[ 1139], 5.00th=[ 1205], 10.00th=[ 1221], 20.00th=[ 1254], 00:11:45.876 | 30.00th=[ 1270], 40.00th=[ 1287], 50.00th=[ 1303], 60.00th=[ 1319], 00:11:45.876 | 70.00th=[ 1336], 80.00th=[ 1352], 90.00th=[ 1369], 95.00th=[ 1401], 00:11:45.876 | 99.00th=[ 1434], 99.50th=[ 1450], 99.90th=[ 1467], 99.95th=[ 1467], 00:11:45.876 | 99.99th=[ 1467] 00:11:45.876 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:45.876 slat (nsec): min=8949, max=49922, avg=31785.25, stdev=7226.59 00:11:45.876 clat (usec): min=226, max=1176, avg=878.55, stdev=197.52 00:11:45.876 lat (usec): min=235, max=1210, avg=910.34, stdev=203.02 00:11:45.876 clat percentiles (usec): 00:11:45.876 | 1.00th=[ 260], 5.00th=[ 388], 10.00th=[ 502], 20.00th=[ 848], 00:11:45.876 | 30.00th=[ 889], 40.00th=[ 922], 50.00th=[ 938], 60.00th=[ 963], 00:11:45.876 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1029], 95.00th=[ 1057], 00:11:45.876 | 99.00th=[ 1106], 99.50th=[ 1123], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:45.876 | 99.99th=[ 1172] 00:11:45.876 bw ( KiB/s): min= 4087, max= 4087, per=50.94%, avg=4087.00, stdev= 0.00, samples=1 00:11:45.876 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:11:45.876 lat (usec) : 250=0.55%, 500=5.06%, 750=2.53%, 1000=35.86% 00:11:45.876 lat (msec) : 2=56.00% 00:11:45.876 cpu : usr=0.80%, sys=3.40%, ctx=911, majf=0, minf=1 00:11:45.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:45.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:45.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:11:45.876 issued rwts: total=397,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:45.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:45.876 00:11:45.876 Run status group 0 (all jobs): 00:11:45.876 READ: bw=3322KiB/s (3402kB/s), 51.7KiB/s-1682KiB/s (53.0kB/s-1723kB/s), io=3392KiB (3473kB), run=1001-1021msec 00:11:45.876 WRITE: bw=8024KiB/s (8216kB/s), 2006KiB/s-2046KiB/s (2054kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1021msec 00:11:45.876 00:11:45.876 Disk stats (read/write): 00:11:45.876 nvme0n1: ios=62/512, merge=0/0, ticks=560/326, in_queue=886, util=88.08% 00:11:45.876 nvme0n2: ios=42/512, merge=0/0, ticks=734/413, in_queue=1147, util=92.70% 00:11:45.876 nvme0n3: ios=345/512, merge=0/0, ticks=1145/377, in_queue=1522, util=96.06% 00:11:45.876 nvme0n4: ios=291/512, merge=0/0, ticks=1280/451, in_queue=1731, util=96.34% 00:11:45.876 10:00:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:45.876 [global] 00:11:45.876 thread=1 00:11:45.876 invalidate=1 00:11:45.876 rw=randwrite 00:11:45.876 time_based=1 00:11:45.876 runtime=1 00:11:45.876 ioengine=libaio 00:11:45.876 direct=1 00:11:45.876 bs=4096 00:11:45.876 iodepth=1 00:11:45.876 norandommap=0 00:11:45.876 numjobs=1 00:11:45.876 00:11:45.876 verify_dump=1 00:11:45.876 verify_backlog=512 00:11:45.876 verify_state_save=0 00:11:45.876 do_verify=1 00:11:45.876 verify=crc32c-intel 00:11:45.876 [job0] 00:11:45.876 filename=/dev/nvme0n1 00:11:45.876 [job1] 00:11:45.876 filename=/dev/nvme0n2 00:11:45.876 [job2] 00:11:45.876 filename=/dev/nvme0n3 00:11:45.876 [job3] 00:11:45.876 filename=/dev/nvme0n4 00:11:45.876 Could not set queue depth (nvme0n1) 00:11:45.876 Could not set queue depth (nvme0n2) 00:11:45.876 Could not set queue depth (nvme0n3) 00:11:45.876 Could not set queue depth (nvme0n4) 00:11:46.141 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.141 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.141 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.141 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:46.141 fio-3.35 00:11:46.141 Starting 4 threads 00:11:47.553 00:11:47.553 job0: (groupid=0, jobs=1): err= 0: pid=1176439: Thu Jul 25 10:00:26 2024 00:11:47.553 read: IOPS=39, BW=159KiB/s (163kB/s)(160KiB/1008msec) 00:11:47.553 slat (nsec): min=24235, max=24953, avg=24514.85, stdev=151.50 00:11:47.554 clat (usec): min=1132, max=43045, avg=12655.75, stdev=18511.58 00:11:47.554 lat (usec): min=1157, max=43070, avg=12680.26, stdev=18511.56 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 1139], 5.00th=[ 1139], 10.00th=[ 1237], 20.00th=[ 1352], 00:11:47.554 | 30.00th=[ 1401], 40.00th=[ 1434], 50.00th=[ 1467], 60.00th=[ 1500], 00:11:47.554 | 70.00th=[ 1582], 80.00th=[41681], 90.00th=[42730], 95.00th=[42730], 00:11:47.554 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:11:47.554 | 99.99th=[43254] 00:11:47.554 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:11:47.554 slat (nsec): min=9368, max=70164, avg=30463.65, stdev=4414.31 00:11:47.554 clat (usec): min=620, max=1359, avg=938.02, stdev=98.86 00:11:47.554 lat (usec): min=650, max=1389, avg=968.49, 
stdev=99.32 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 701], 5.00th=[ 791], 10.00th=[ 824], 20.00th=[ 865], 00:11:47.554 | 30.00th=[ 898], 40.00th=[ 914], 50.00th=[ 938], 60.00th=[ 955], 00:11:47.554 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1045], 95.00th=[ 1123], 00:11:47.554 | 99.00th=[ 1237], 99.50th=[ 1303], 99.90th=[ 1352], 99.95th=[ 1352], 00:11:47.554 | 99.99th=[ 1352] 00:11:47.554 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.554 lat (usec) : 750=2.36%, 1000=73.91% 00:11:47.554 lat (msec) : 2=21.74%, 50=1.99% 00:11:47.554 cpu : usr=1.19%, sys=1.39%, ctx=553, majf=0, minf=1 00:11:47.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 issued rwts: total=40,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.554 job1: (groupid=0, jobs=1): err= 0: pid=1176440: Thu Jul 25 10:00:26 2024 00:11:47.554 read: IOPS=393, BW=1574KiB/s (1612kB/s)(1576KiB/1001msec) 00:11:47.554 slat (nsec): min=23496, max=58979, avg=24716.02, stdev=4176.22 00:11:47.554 clat (usec): min=824, max=1525, avg=1256.24, stdev=59.36 00:11:47.554 lat (usec): min=848, max=1549, avg=1280.96, stdev=59.35 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 1090], 5.00th=[ 1156], 10.00th=[ 1188], 20.00th=[ 1221], 00:11:47.554 | 30.00th=[ 1237], 40.00th=[ 1237], 50.00th=[ 1254], 60.00th=[ 1270], 00:11:47.554 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1319], 95.00th=[ 1336], 00:11:47.554 | 99.00th=[ 1418], 99.50th=[ 1418], 99.90th=[ 1532], 99.95th=[ 1532], 00:11:47.554 | 99.99th=[ 1532] 00:11:47.554 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:11:47.554 slat (nsec): min=10299, max=47712, avg=30125.61, stdev=2657.39 00:11:47.554 clat (usec): min=582, max=1173, avg=922.98, stdev=88.70 00:11:47.554 lat (usec): min=612, max=1204, avg=953.11, stdev=88.63 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 693], 5.00th=[ 766], 10.00th=[ 799], 20.00th=[ 857], 00:11:47.554 | 30.00th=[ 889], 40.00th=[ 906], 50.00th=[ 922], 60.00th=[ 955], 00:11:47.554 | 70.00th=[ 963], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1074], 00:11:47.554 | 99.00th=[ 1123], 99.50th=[ 1156], 99.90th=[ 1172], 99.95th=[ 1172], 00:11:47.554 | 99.99th=[ 1172] 00:11:47.554 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.554 lat (usec) : 750=1.99%, 1000=44.04% 00:11:47.554 lat (msec) : 2=53.97% 00:11:47.554 cpu : usr=0.90%, sys=3.20%, ctx=907, majf=0, minf=1 00:11:47.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 issued rwts: total=394,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.554 job2: (groupid=0, jobs=1): err= 0: pid=1176441: Thu Jul 25 10:00:26 2024 00:11:47.554 read: IOPS=182, BW=731KiB/s (748kB/s)(732KiB/1002msec) 00:11:47.554 slat (nsec): min=8210, max=46119, avg=24736.10, stdev=3148.67 
00:11:47.554 clat (usec): min=1070, max=42931, avg=2807.23, stdev=7317.18 00:11:47.554 lat (usec): min=1091, max=42956, avg=2831.97, stdev=7317.16 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 1106], 5.00th=[ 1270], 10.00th=[ 1336], 20.00th=[ 1369], 00:11:47.554 | 30.00th=[ 1385], 40.00th=[ 1418], 50.00th=[ 1418], 60.00th=[ 1434], 00:11:47.554 | 70.00th=[ 1467], 80.00th=[ 1483], 90.00th=[ 1532], 95.00th=[ 1795], 00:11:47.554 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:47.554 | 99.99th=[42730] 00:11:47.554 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:11:47.554 slat (nsec): min=9289, max=48794, avg=28243.14, stdev=7049.43 00:11:47.554 clat (usec): min=612, max=1113, avg=904.55, stdev=92.19 00:11:47.554 lat (usec): min=624, max=1144, avg=932.80, stdev=95.63 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 635], 5.00th=[ 717], 10.00th=[ 766], 20.00th=[ 832], 00:11:47.554 | 30.00th=[ 865], 40.00th=[ 906], 50.00th=[ 930], 60.00th=[ 947], 00:11:47.554 | 70.00th=[ 963], 80.00th=[ 979], 90.00th=[ 1004], 95.00th=[ 1020], 00:11:47.554 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1106], 99.95th=[ 1106], 00:11:47.554 | 99.99th=[ 1106] 00:11:47.554 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 00:11:47.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.554 lat (usec) : 750=6.33%, 1000=60.00% 00:11:47.554 lat (msec) : 2=32.66%, 10=0.14%, 50=0.86% 00:11:47.554 cpu : usr=1.00%, sys=2.00%, ctx=695, majf=0, minf=1 00:11:47.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 issued rwts: total=183,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.554 job3: (groupid=0, jobs=1): err= 0: pid=1176442: Thu Jul 25 10:00:26 2024 00:11:47.554 read: IOPS=49, BW=199KiB/s (204kB/s)(200KiB/1005msec) 00:11:47.554 slat (nsec): min=7097, max=44053, avg=24522.62, stdev=4550.84 00:11:47.554 clat (usec): min=1110, max=42580, avg=10298.84, stdev=17000.61 00:11:47.554 lat (usec): min=1135, max=42605, avg=10323.36, stdev=17000.67 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 1106], 5.00th=[ 1123], 10.00th=[ 1172], 20.00th=[ 1254], 00:11:47.554 | 30.00th=[ 1270], 40.00th=[ 1303], 50.00th=[ 1385], 60.00th=[ 1467], 00:11:47.554 | 70.00th=[ 1647], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:47.554 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:11:47.554 | 99.99th=[42730] 00:11:47.554 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:11:47.554 slat (nsec): min=10521, max=49299, avg=30795.11, stdev=3680.36 00:11:47.554 clat (usec): min=590, max=1255, avg=914.32, stdev=91.04 00:11:47.554 lat (usec): min=621, max=1293, avg=945.11, stdev=90.89 00:11:47.554 clat percentiles (usec): 00:11:47.554 | 1.00th=[ 676], 5.00th=[ 758], 10.00th=[ 807], 20.00th=[ 848], 00:11:47.554 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[ 922], 60.00th=[ 938], 00:11:47.554 | 70.00th=[ 955], 80.00th=[ 979], 90.00th=[ 1012], 95.00th=[ 1057], 00:11:47.554 | 99.00th=[ 1156], 99.50th=[ 1237], 99.90th=[ 1254], 99.95th=[ 1254], 00:11:47.554 | 99.99th=[ 1254] 00:11:47.554 bw ( KiB/s): min= 4096, max= 4096, per=50.40%, avg=4096.00, stdev= 0.00, samples=1 
00:11:47.554 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:47.554 lat (usec) : 750=3.56%, 1000=75.09% 00:11:47.554 lat (msec) : 2=19.40%, 50=1.96% 00:11:47.554 cpu : usr=0.70%, sys=1.89%, ctx=562, majf=0, minf=1 00:11:47.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:47.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:47.554 issued rwts: total=50,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:47.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:47.554 00:11:47.554 Run status group 0 (all jobs): 00:11:47.554 READ: bw=2647KiB/s (2710kB/s), 159KiB/s-1574KiB/s (163kB/s-1612kB/s), io=2668KiB (2732kB), run=1001-1008msec 00:11:47.554 WRITE: bw=8127KiB/s (8322kB/s), 2032KiB/s-2046KiB/s (2081kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1008msec 00:11:47.554 00:11:47.554 Disk stats (read/write): 00:11:47.554 nvme0n1: ios=85/512, merge=0/0, ticks=452/480, in_queue=932, util=92.79% 00:11:47.554 nvme0n2: ios=308/512, merge=0/0, ticks=398/448, in_queue=846, util=88.89% 00:11:47.554 nvme0n3: ios=70/512, merge=0/0, ticks=355/445, in_queue=800, util=88.41% 00:11:47.554 nvme0n4: ios=46/512, merge=0/0, ticks=346/457, in_queue=803, util=89.54% 00:11:47.554 10:00:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:47.554 [global] 00:11:47.554 thread=1 00:11:47.554 invalidate=1 00:11:47.554 rw=write 00:11:47.554 time_based=1 00:11:47.554 runtime=1 00:11:47.554 ioengine=libaio 00:11:47.554 direct=1 00:11:47.554 bs=4096 00:11:47.554 iodepth=128 00:11:47.554 norandommap=0 00:11:47.554 numjobs=1 00:11:47.554 00:11:47.554 verify_dump=1 00:11:47.554 verify_backlog=512 00:11:47.554 verify_state_save=0 00:11:47.554 do_verify=1 00:11:47.554 verify=crc32c-intel 00:11:47.554 [job0] 00:11:47.554 filename=/dev/nvme0n1 00:11:47.554 [job1] 00:11:47.554 filename=/dev/nvme0n2 00:11:47.554 [job2] 00:11:47.554 filename=/dev/nvme0n3 00:11:47.554 [job3] 00:11:47.554 filename=/dev/nvme0n4 00:11:47.554 Could not set queue depth (nvme0n1) 00:11:47.554 Could not set queue depth (nvme0n2) 00:11:47.554 Could not set queue depth (nvme0n3) 00:11:47.554 Could not set queue depth (nvme0n4) 00:11:47.816 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.816 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.816 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.816 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:47.816 fio-3.35 00:11:47.816 Starting 4 threads 00:11:49.228 00:11:49.228 job0: (groupid=0, jobs=1): err= 0: pid=1176961: Thu Jul 25 10:00:28 2024 00:11:49.228 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:11:49.228 slat (nsec): min=894, max=15708k, avg=101252.16, stdev=735627.87 00:11:49.228 clat (usec): min=4948, max=34201, avg=13583.60, stdev=3734.56 00:11:49.228 lat (usec): min=4950, max=34209, avg=13684.85, stdev=3778.21 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10683], 00:11:49.228 | 30.00th=[11207], 40.00th=[12256], 50.00th=[12911], 60.00th=[13829], 00:11:49.228 | 
70.00th=[14746], 80.00th=[16712], 90.00th=[17957], 95.00th=[19792], 00:11:49.228 | 99.00th=[26346], 99.50th=[28181], 99.90th=[34341], 99.95th=[34341], 00:11:49.228 | 99.99th=[34341] 00:11:49.228 write: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1004msec); 0 zone resets 00:11:49.228 slat (nsec): min=1593, max=8005.3k, avg=109980.16, stdev=596809.79 00:11:49.228 clat (usec): min=1104, max=34219, avg=13969.50, stdev=5829.87 00:11:49.228 lat (usec): min=1114, max=34244, avg=14079.48, stdev=5847.54 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[ 4883], 5.00th=[ 7046], 10.00th=[ 8455], 20.00th=[ 9503], 00:11:49.228 | 30.00th=[10290], 40.00th=[11207], 50.00th=[12387], 60.00th=[13304], 00:11:49.228 | 70.00th=[15401], 80.00th=[18482], 90.00th=[23200], 95.00th=[26608], 00:11:49.228 | 99.00th=[29754], 99.50th=[30540], 99.90th=[31851], 99.95th=[31851], 00:11:49.228 | 99.99th=[34341] 00:11:49.228 bw ( KiB/s): min=18216, max=18648, per=21.06%, avg=18432.00, stdev=305.47, samples=2 00:11:49.228 iops : min= 4554, max= 4662, avg=4608.00, stdev=76.37, samples=2 00:11:49.228 lat (msec) : 2=0.02%, 4=0.39%, 10=19.49%, 20=69.35%, 50=10.76% 00:11:49.228 cpu : usr=3.29%, sys=4.99%, ctx=429, majf=0, minf=1 00:11:49.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:49.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.228 issued rwts: total=4608,4634,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.228 job1: (groupid=0, jobs=1): err= 0: pid=1176962: Thu Jul 25 10:00:28 2024 00:11:49.228 read: IOPS=2694, BW=10.5MiB/s (11.0MB/s)(11.0MiB/1046msec) 00:11:49.228 slat (nsec): min=897, max=24679k, avg=198194.02, stdev=1384337.81 00:11:49.228 clat (usec): min=8372, max=83510, avg=26639.39, stdev=13629.25 00:11:49.228 lat (usec): min=8382, max=83514, avg=26837.58, stdev=13718.86 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[12125], 5.00th=[12649], 10.00th=[13435], 20.00th=[14484], 00:11:49.228 | 30.00th=[16319], 40.00th=[20841], 50.00th=[25297], 60.00th=[27132], 00:11:49.228 | 70.00th=[33162], 80.00th=[36439], 90.00th=[39584], 95.00th=[45876], 00:11:49.228 | 99.00th=[83362], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:11:49.228 | 99.99th=[83362] 00:11:49.228 write: IOPS=2936, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1046msec); 0 zone resets 00:11:49.228 slat (nsec): min=1596, max=16080k, avg=138053.42, stdev=1016511.29 00:11:49.228 clat (usec): min=5593, max=45700, avg=18564.47, stdev=6570.64 00:11:49.228 lat (usec): min=5601, max=45731, avg=18702.53, stdev=6646.27 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[ 6783], 5.00th=[ 9503], 10.00th=[11207], 20.00th=[12256], 00:11:49.228 | 30.00th=[13698], 40.00th=[15795], 50.00th=[17433], 60.00th=[20055], 00:11:49.228 | 70.00th=[21103], 80.00th=[25822], 90.00th=[28443], 95.00th=[29754], 00:11:49.228 | 99.00th=[33817], 99.50th=[33817], 99.90th=[42206], 99.95th=[42730], 00:11:49.228 | 99.99th=[45876] 00:11:49.228 bw ( KiB/s): min=12288, max=12288, per=14.04%, avg=12288.00, stdev= 0.00, samples=2 00:11:49.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:49.228 lat (msec) : 10=3.16%, 20=46.30%, 50=48.29%, 100=2.26% 00:11:49.228 cpu : usr=2.58%, sys=2.97%, ctx=179, majf=0, minf=2 00:11:49.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:49.228 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.228 issued rwts: total=2818,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.228 job2: (groupid=0, jobs=1): err= 0: pid=1176963: Thu Jul 25 10:00:28 2024 00:11:49.228 read: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec) 00:11:49.228 slat (nsec): min=984, max=7514.5k, avg=69932.16, stdev=488025.98 00:11:49.228 clat (usec): min=4171, max=18322, avg=9811.13, stdev=2115.42 00:11:49.228 lat (usec): min=4177, max=18335, avg=9881.06, stdev=2132.85 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[ 5735], 5.00th=[ 6587], 10.00th=[ 7308], 20.00th=[ 7963], 00:11:49.228 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[10028], 00:11:49.228 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12780], 95.00th=[13698], 00:11:49.228 | 99.00th=[15270], 99.50th=[15401], 99.90th=[16909], 99.95th=[16909], 00:11:49.228 | 99.99th=[18220] 00:11:49.228 write: IOPS=6966, BW=27.2MiB/s (28.5MB/s)(27.3MiB/1003msec); 0 zone resets 00:11:49.228 slat (nsec): min=1680, max=6972.3k, avg=71748.02, stdev=414726.50 00:11:49.228 clat (usec): min=1169, max=20293, avg=8845.69, stdev=2789.58 00:11:49.228 lat (usec): min=1178, max=20295, avg=8917.44, stdev=2797.17 00:11:49.228 clat percentiles (usec): 00:11:49.228 | 1.00th=[ 3458], 5.00th=[ 4686], 10.00th=[ 5538], 20.00th=[ 6456], 00:11:49.228 | 30.00th=[ 7177], 40.00th=[ 7963], 50.00th=[ 8586], 60.00th=[ 9110], 00:11:49.228 | 70.00th=[10159], 80.00th=[10945], 90.00th=[12518], 95.00th=[14222], 00:11:49.228 | 99.00th=[15926], 99.50th=[16450], 99.90th=[18744], 99.95th=[18744], 00:11:49.228 | 99.99th=[20317] 00:11:49.228 bw ( KiB/s): min=27352, max=27528, per=31.35%, avg=27440.00, stdev=124.45, samples=2 00:11:49.228 iops : min= 6838, max= 6882, avg=6860.00, stdev=31.11, samples=2 00:11:49.228 lat (msec) : 2=0.02%, 4=1.14%, 10=62.63%, 20=36.20%, 50=0.01% 00:11:49.228 cpu : usr=3.99%, sys=6.79%, ctx=589, majf=0, minf=1 00:11:49.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:11:49.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.228 issued rwts: total=6656,6987,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.229 job3: (groupid=0, jobs=1): err= 0: pid=1176964: Thu Jul 25 10:00:28 2024 00:11:49.229 read: IOPS=7745, BW=30.3MiB/s (31.7MB/s)(30.5MiB/1008msec) 00:11:49.229 slat (nsec): min=987, max=6867.4k, avg=62497.27, stdev=428056.34 00:11:49.229 clat (usec): min=2761, max=15192, avg=8314.92, stdev=1794.74 00:11:49.229 lat (usec): min=3704, max=15221, avg=8377.42, stdev=1811.58 00:11:49.229 clat percentiles (usec): 00:11:49.229 | 1.00th=[ 5407], 5.00th=[ 6063], 10.00th=[ 6456], 20.00th=[ 6849], 00:11:49.229 | 30.00th=[ 7242], 40.00th=[ 7504], 50.00th=[ 7898], 60.00th=[ 8356], 00:11:49.229 | 70.00th=[ 8848], 80.00th=[ 9634], 90.00th=[11076], 95.00th=[11863], 00:11:49.229 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14877], 99.95th=[14877], 00:11:49.229 | 99.99th=[15139] 00:11:49.229 write: IOPS=8126, BW=31.7MiB/s (33.3MB/s)(32.0MiB/1008msec); 0 zone resets 00:11:49.229 slat (nsec): min=1640, max=11737k, avg=58885.13, stdev=350356.64 00:11:49.229 clat (usec): min=1267, max=18370, avg=7471.24, stdev=2173.36 00:11:49.229 lat 
(usec): min=1277, max=18379, avg=7530.13, stdev=2169.77 00:11:49.229 clat percentiles (usec): 00:11:49.229 | 1.00th=[ 2999], 5.00th=[ 4293], 10.00th=[ 4948], 20.00th=[ 6063], 00:11:49.229 | 30.00th=[ 6718], 40.00th=[ 7111], 50.00th=[ 7373], 60.00th=[ 7570], 00:11:49.229 | 70.00th=[ 7767], 80.00th=[ 8356], 90.00th=[ 9896], 95.00th=[11207], 00:11:49.229 | 99.00th=[15926], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:11:49.229 | 99.99th=[18482] 00:11:49.229 bw ( KiB/s): min=32760, max=32768, per=37.44%, avg=32764.00, stdev= 5.66, samples=2 00:11:49.229 iops : min= 8190, max= 8192, avg=8191.00, stdev= 1.41, samples=2 00:11:49.229 lat (msec) : 2=0.04%, 4=1.97%, 10=85.26%, 20=12.74% 00:11:49.229 cpu : usr=5.06%, sys=6.16%, ctx=724, majf=0, minf=1 00:11:49.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:49.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:49.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:49.229 issued rwts: total=7807,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:49.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:49.229 00:11:49.229 Run status group 0 (all jobs): 00:11:49.229 READ: bw=81.7MiB/s (85.7MB/s), 10.5MiB/s-30.3MiB/s (11.0MB/s-31.7MB/s), io=85.5MiB (89.7MB), run=1003-1046msec 00:11:49.229 WRITE: bw=85.5MiB/s (89.6MB/s), 11.5MiB/s-31.7MiB/s (12.0MB/s-33.3MB/s), io=89.4MiB (93.7MB), run=1003-1046msec 00:11:49.229 00:11:49.229 Disk stats (read/write): 00:11:49.229 nvme0n1: ios=3634/4096, merge=0/0, ticks=49423/53596, in_queue=103019, util=92.59% 00:11:49.229 nvme0n2: ios=2307/2560, merge=0/0, ticks=28042/21502, in_queue=49544, util=100.00% 00:11:49.229 nvme0n3: ios=5583/5632, merge=0/0, ticks=53906/49054, in_queue=102960, util=97.05% 00:11:49.229 nvme0n4: ios=6640/6671, merge=0/0, ticks=53328/47931, in_queue=101259, util=97.01% 00:11:49.229 10:00:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:49.229 [global] 00:11:49.229 thread=1 00:11:49.229 invalidate=1 00:11:49.229 rw=randwrite 00:11:49.229 time_based=1 00:11:49.229 runtime=1 00:11:49.229 ioengine=libaio 00:11:49.229 direct=1 00:11:49.229 bs=4096 00:11:49.229 iodepth=128 00:11:49.229 norandommap=0 00:11:49.229 numjobs=1 00:11:49.229 00:11:49.229 verify_dump=1 00:11:49.229 verify_backlog=512 00:11:49.229 verify_state_save=0 00:11:49.229 do_verify=1 00:11:49.229 verify=crc32c-intel 00:11:49.229 [job0] 00:11:49.229 filename=/dev/nvme0n1 00:11:49.229 [job1] 00:11:49.229 filename=/dev/nvme0n2 00:11:49.229 [job2] 00:11:49.229 filename=/dev/nvme0n3 00:11:49.229 [job3] 00:11:49.229 filename=/dev/nvme0n4 00:11:49.229 Could not set queue depth (nvme0n1) 00:11:49.229 Could not set queue depth (nvme0n2) 00:11:49.229 Could not set queue depth (nvme0n3) 00:11:49.229 Could not set queue depth (nvme0n4) 00:11:49.491 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.491 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.491 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.491 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.491 fio-3.35 00:11:49.491 Starting 4 threads 00:11:50.937 00:11:50.937 
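The fio-wrapper call traced above (scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v) expands into the job file printed in the log: a one-second, time-based 4 KiB random-write pass at queue depth 128 against the four NVMe-oF namespaces, with crc32c-intel data verification. As a minimal sketch, a comparable run could be reproduced with plain fio, assuming the same /dev/nvme0n1..n4 namespaces are connected on the host (device names are taken from this log and are not guaranteed elsewhere):

  # Options mirror the [global]/[jobN] sections printed above; adjust the
  # filenames to whatever namespaces nvme connect exposed locally.
  fio --thread=1 --invalidate=1 --rw=randwrite --time_based=1 --runtime=1 \
      --ioengine=libaio --direct=1 --bs=4096 --iodepth=128 --norandommap=0 --numjobs=1 \
      --do_verify=1 --verify=crc32c-intel --verify_dump=1 --verify_backlog=512 --verify_state_save=0 \
      --name=job0 --filename=/dev/nvme0n1 \
      --name=job1 --filename=/dev/nvme0n2 \
      --name=job2 --filename=/dev/nvme0n3 \
      --name=job3 --filename=/dev/nvme0n4

Options given before the first --name act as the [global] section, so every per-job --name/--filename pair inherits the same verification settings.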
job0: (groupid=0, jobs=1): err= 0: pid=1177484: Thu Jul 25 10:00:29 2024 00:11:50.937 read: IOPS=8009, BW=31.3MiB/s (32.8MB/s)(31.5MiB/1006msec) 00:11:50.937 slat (nsec): min=907, max=6935.6k, avg=61030.90, stdev=437259.52 00:11:50.937 clat (usec): min=2937, max=20262, avg=8015.39, stdev=2162.28 00:11:50.937 lat (usec): min=3494, max=20264, avg=8076.42, stdev=2191.51 00:11:50.937 clat percentiles (usec): 00:11:50.937 | 1.00th=[ 4686], 5.00th=[ 5342], 10.00th=[ 5735], 20.00th=[ 6390], 00:11:50.937 | 30.00th=[ 6652], 40.00th=[ 6915], 50.00th=[ 7504], 60.00th=[ 8291], 00:11:50.937 | 70.00th=[ 8848], 80.00th=[ 9896], 90.00th=[11076], 95.00th=[12125], 00:11:50.937 | 99.00th=[14877], 99.50th=[16581], 99.90th=[19268], 99.95th=[20317], 00:11:50.937 | 99.99th=[20317] 00:11:50.937 write: IOPS=8143, BW=31.8MiB/s (33.4MB/s)(32.0MiB/1006msec); 0 zone resets 00:11:50.937 slat (nsec): min=1481, max=5801.4k, avg=58460.89, stdev=329544.03 00:11:50.937 clat (usec): min=1434, max=20262, avg=7689.48, stdev=3060.92 00:11:50.937 lat (usec): min=2357, max=20265, avg=7747.94, stdev=3072.97 00:11:50.937 clat percentiles (usec): 00:11:50.937 | 1.00th=[ 2802], 5.00th=[ 3884], 10.00th=[ 4424], 20.00th=[ 5342], 00:11:50.937 | 30.00th=[ 6063], 40.00th=[ 6521], 50.00th=[ 6915], 60.00th=[ 7570], 00:11:50.937 | 70.00th=[ 8455], 80.00th=[ 9765], 90.00th=[11731], 95.00th=[13960], 00:11:50.937 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19006], 99.95th=[19530], 00:11:50.937 | 99.99th=[20317] 00:11:50.938 bw ( KiB/s): min=32760, max=32776, per=35.34%, avg=32768.00, stdev=11.31, samples=2 00:11:50.938 iops : min= 8190, max= 8194, avg=8192.00, stdev= 2.83, samples=2 00:11:50.938 lat (msec) : 2=0.02%, 4=3.55%, 10=78.15%, 20=18.23%, 50=0.04% 00:11:50.938 cpu : usr=3.68%, sys=6.57%, ctx=749, majf=0, minf=1 00:11:50.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:50.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.938 issued rwts: total=8058,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.938 job1: (groupid=0, jobs=1): err= 0: pid=1177485: Thu Jul 25 10:00:29 2024 00:11:50.938 read: IOPS=5315, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1003msec) 00:11:50.938 slat (nsec): min=892, max=45283k, avg=98775.72, stdev=786448.75 00:11:50.938 clat (usec): min=1313, max=57537, avg=12571.29, stdev=6563.74 00:11:50.938 lat (usec): min=4646, max=57552, avg=12670.06, stdev=6601.11 00:11:50.938 clat percentiles (usec): 00:11:50.938 | 1.00th=[ 7963], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:11:50.938 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:11:50.938 | 70.00th=[12256], 80.00th=[12518], 90.00th=[12911], 95.00th=[13698], 00:11:50.938 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54789], 99.95th=[56361], 00:11:50.938 | 99.99th=[57410] 00:11:50.938 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:50.938 slat (nsec): min=1530, max=12871k, avg=81019.61, stdev=416030.59 00:11:50.938 clat (usec): min=6369, max=26388, avg=10587.99, stdev=2428.35 00:11:50.938 lat (usec): min=6376, max=26404, avg=10669.01, stdev=2446.93 00:11:50.938 clat percentiles (usec): 00:11:50.938 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:11:50.938 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10290], 00:11:50.938 | 70.00th=[10945], 80.00th=[11600], 
90.00th=[13304], 95.00th=[14484], 00:11:50.938 | 99.00th=[20579], 99.50th=[21365], 99.90th=[21890], 99.95th=[22152], 00:11:50.938 | 99.99th=[26346] 00:11:50.938 bw ( KiB/s): min=20776, max=24280, per=24.30%, avg=22528.00, stdev=2477.70, samples=2 00:11:50.938 iops : min= 5194, max= 6070, avg=5632.00, stdev=619.43, samples=2 00:11:50.938 lat (msec) : 2=0.01%, 10=31.45%, 20=66.70%, 50=0.68%, 100=1.16% 00:11:50.938 cpu : usr=2.99%, sys=2.89%, ctx=780, majf=0, minf=1 00:11:50.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:50.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.938 issued rwts: total=5331,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.938 job2: (groupid=0, jobs=1): err= 0: pid=1177486: Thu Jul 25 10:00:29 2024 00:11:50.938 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:11:50.938 slat (nsec): min=1127, max=16489k, avg=145999.67, stdev=994628.03 00:11:50.938 clat (usec): min=6011, max=63453, avg=20843.56, stdev=10877.11 00:11:50.938 lat (usec): min=6018, max=63480, avg=20989.56, stdev=10980.76 00:11:50.938 clat percentiles (usec): 00:11:50.938 | 1.00th=[ 8356], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[13173], 00:11:50.938 | 30.00th=[14353], 40.00th=[15270], 50.00th=[16319], 60.00th=[17433], 00:11:50.938 | 70.00th=[22938], 80.00th=[28705], 90.00th=[39584], 95.00th=[45351], 00:11:50.938 | 99.00th=[52691], 99.50th=[52691], 99.90th=[54789], 99.95th=[60556], 00:11:50.938 | 99.99th=[63701] 00:11:50.938 write: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1011msec); 0 zone resets 00:11:50.938 slat (nsec): min=1567, max=11674k, avg=131387.32, stdev=791162.19 00:11:50.938 clat (usec): min=1282, max=50250, avg=18569.05, stdev=10939.27 00:11:50.938 lat (usec): min=1290, max=50256, avg=18700.44, stdev=11001.96 00:11:50.938 clat percentiles (usec): 00:11:50.938 | 1.00th=[ 3687], 5.00th=[ 6325], 10.00th=[ 8160], 20.00th=[10683], 00:11:50.938 | 30.00th=[11338], 40.00th=[12911], 50.00th=[15533], 60.00th=[16909], 00:11:50.938 | 70.00th=[20317], 80.00th=[26870], 90.00th=[36963], 95.00th=[43254], 00:11:50.938 | 99.00th=[49546], 99.50th=[49546], 99.90th=[50070], 99.95th=[50070], 00:11:50.938 | 99.99th=[50070] 00:11:50.938 bw ( KiB/s): min=12856, max=13848, per=14.40%, avg=13352.00, stdev=701.45, samples=2 00:11:50.938 iops : min= 3214, max= 3462, avg=3338.00, stdev=175.36, samples=2 00:11:50.938 lat (msec) : 2=0.09%, 4=0.61%, 10=11.04%, 20=55.61%, 50=31.78% 00:11:50.938 lat (msec) : 100=0.86% 00:11:50.938 cpu : usr=2.77%, sys=3.27%, ctx=293, majf=0, minf=2 00:11:50.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:11:50.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.938 issued rwts: total=3072,3466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.938 job3: (groupid=0, jobs=1): err= 0: pid=1177487: Thu Jul 25 10:00:29 2024 00:11:50.938 read: IOPS=5801, BW=22.7MiB/s (23.8MB/s)(22.9MiB/1009msec) 00:11:50.938 slat (nsec): min=923, max=11567k, avg=83598.62, stdev=615921.14 00:11:50.938 clat (usec): min=4136, max=35409, avg=11598.81, stdev=4772.99 00:11:50.938 lat (usec): min=4138, max=35436, avg=11682.40, stdev=4808.76 00:11:50.938 clat percentiles 
(usec): 00:11:50.938 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6915], 20.00th=[ 7701], 00:11:50.938 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9896], 60.00th=[11338], 00:11:50.938 | 70.00th=[13304], 80.00th=[15926], 90.00th=[18482], 95.00th=[20317], 00:11:50.938 | 99.00th=[26346], 99.50th=[27657], 99.90th=[27657], 99.95th=[27657], 00:11:50.938 | 99.99th=[35390] 00:11:50.938 write: IOPS=6089, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1009msec); 0 zone resets 00:11:50.938 slat (nsec): min=1589, max=12498k, avg=79347.69, stdev=488548.08 00:11:50.938 clat (usec): min=1362, max=27910, avg=9669.77, stdev=4166.02 00:11:50.938 lat (usec): min=1370, max=27940, avg=9749.12, stdev=4181.40 00:11:50.938 clat percentiles (usec): 00:11:50.938 | 1.00th=[ 3294], 5.00th=[ 4883], 10.00th=[ 5669], 20.00th=[ 6587], 00:11:50.938 | 30.00th=[ 7308], 40.00th=[ 7635], 50.00th=[ 8225], 60.00th=[ 9634], 00:11:50.938 | 70.00th=[10945], 80.00th=[12518], 90.00th=[15533], 95.00th=[17433], 00:11:50.938 | 99.00th=[23987], 99.50th=[25560], 99.90th=[27132], 99.95th=[27132], 00:11:50.938 | 99.99th=[27919] 00:11:50.938 bw ( KiB/s): min=22136, max=27016, per=26.51%, avg=24576.00, stdev=3450.68, samples=2 00:11:50.938 iops : min= 5534, max= 6754, avg=6144.00, stdev=862.67, samples=2 00:11:50.938 lat (msec) : 2=0.03%, 4=0.89%, 10=55.58%, 20=38.99%, 50=4.50% 00:11:50.938 cpu : usr=2.78%, sys=5.36%, ctx=575, majf=0, minf=1 00:11:50.938 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:50.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:50.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:50.938 issued rwts: total=5854,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:50.938 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:50.938 00:11:50.938 Run status group 0 (all jobs): 00:11:50.938 READ: bw=86.2MiB/s (90.4MB/s), 11.9MiB/s-31.3MiB/s (12.4MB/s-32.8MB/s), io=87.2MiB (91.4MB), run=1003-1011msec 00:11:50.938 WRITE: bw=90.5MiB/s (94.9MB/s), 13.4MiB/s-31.8MiB/s (14.0MB/s-33.4MB/s), io=91.5MiB (96.0MB), run=1003-1011msec 00:11:50.938 00:11:50.938 Disk stats (read/write): 00:11:50.938 nvme0n1: ios=6706/7168, merge=0/0, ticks=51999/52141, in_queue=104140, util=89.78% 00:11:50.938 nvme0n2: ios=4572/4608, merge=0/0, ticks=20202/16441, in_queue=36643, util=99.19% 00:11:50.938 nvme0n3: ios=2587/2836, merge=0/0, ticks=34193/34272, in_queue=68465, util=89.99% 00:11:50.938 nvme0n4: ios=4963/5120, merge=0/0, ticks=53413/47179, in_queue=100592, util=99.05% 00:11:50.938 10:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:50.938 10:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1177715 00:11:50.938 10:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:50.938 10:00:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:50.938 [global] 00:11:50.938 thread=1 00:11:50.938 invalidate=1 00:11:50.938 rw=read 00:11:50.938 time_based=1 00:11:50.938 runtime=10 00:11:50.938 ioengine=libaio 00:11:50.938 direct=1 00:11:50.938 bs=4096 00:11:50.938 iodepth=1 00:11:50.938 norandommap=1 00:11:50.938 numjobs=1 00:11:50.938 00:11:50.938 [job0] 00:11:50.938 filename=/dev/nvme0n1 00:11:50.938 [job1] 00:11:50.938 filename=/dev/nvme0n2 00:11:50.938 [job2] 00:11:50.938 filename=/dev/nvme0n3 00:11:50.938 [job3] 00:11:50.938 filename=/dev/nvme0n4 
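The 10-second read job configured above is launched in the background (its PID is captured as fio_pid at fio.sh@59), and the script then deletes the RAID and malloc bdevs on the target while the job is still running, so the Remote I/O errors and the non-zero fio status that follow are the expected outcome of the hotplug test, not a regression. A condensed bash sketch of that hot-remove pattern, using the rpc.py path and bdev names from this run (both are workspace-specific and would differ in other setups):

  # Run verification reads in the background, pull the backing bdevs out from
  # under them, then treat a failing fio exit status as the expected result.
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  $spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  $spdk/scripts/rpc.py bdev_raid_delete concat0
  $spdk/scripts/rpc.py bdev_raid_delete raid0
  for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
  done

  fio_status=0
  wait "$fio_pid" || fio_status=$?
  if [ "$fio_status" -ne 0 ]; then
      echo "nvmf hotplug test: fio failed as expected"
  fi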
00:11:50.938 Could not set queue depth (nvme0n1) 00:11:50.938 Could not set queue depth (nvme0n2) 00:11:50.938 Could not set queue depth (nvme0n3) 00:11:50.938 Could not set queue depth (nvme0n4) 00:11:51.203 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.203 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.203 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.203 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:51.203 fio-3.35 00:11:51.203 Starting 4 threads 00:11:53.753 10:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:53.753 10:00:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:53.754 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=262144, buflen=4096 00:11:53.754 fio: pid=1178017, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.016 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5476352, buflen=4096 00:11:54.016 fio: pid=1178016, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.016 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.016 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:54.277 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.277 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:54.277 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=294912, buflen=4096 00:11:54.277 fio: pid=1178014, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.277 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=7270400, buflen=4096 00:11:54.277 fio: pid=1178015, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:54.277 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.277 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:54.277 00:11:54.277 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1178014: Thu Jul 25 10:00:33 2024 00:11:54.277 read: IOPS=24, BW=97.8KiB/s (100kB/s)(288KiB/2946msec) 00:11:54.277 slat (usec): min=8, max=6863, avg=198.08, stdev=1043.19 00:11:54.277 clat (usec): min=540, max=41457, avg=40418.55, stdev=4766.54 00:11:54.277 lat (usec): min=574, max=48006, avg=40619.03, stdev=4906.78 00:11:54.277 clat percentiles (usec): 00:11:54.277 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:54.277 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:54.277 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:54.277 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:11:54.277 | 99.99th=[41681] 00:11:54.277 bw ( KiB/s): min= 96, max= 104, per=2.37%, avg=99.20, stdev= 4.38, samples=5 00:11:54.277 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:11:54.277 lat (usec) : 750=1.37% 00:11:54.277 lat (msec) : 50=97.26% 00:11:54.277 cpu : usr=0.10%, sys=0.00%, ctx=75, majf=0, minf=1 00:11:54.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.277 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1178015: Thu Jul 25 10:00:33 2024 00:11:54.277 read: IOPS=571, BW=2284KiB/s (2339kB/s)(7100KiB/3109msec) 00:11:54.277 slat (usec): min=7, max=12841, avg=38.91, stdev=422.67 00:11:54.277 clat (usec): min=964, max=42996, avg=1690.62, stdev=3362.50 00:11:54.277 lat (usec): min=989, max=43021, avg=1729.54, stdev=3388.85 00:11:54.277 clat percentiles (usec): 00:11:54.277 | 1.00th=[ 1074], 5.00th=[ 1139], 10.00th=[ 1172], 20.00th=[ 1205], 00:11:54.277 | 30.00th=[ 1237], 40.00th=[ 1319], 50.00th=[ 1418], 60.00th=[ 1483], 00:11:54.277 | 70.00th=[ 1549], 80.00th=[ 1598], 90.00th=[ 1663], 95.00th=[ 1680], 00:11:54.277 | 99.00th=[ 1778], 99.50th=[42206], 99.90th=[42730], 99.95th=[43254], 00:11:54.277 | 99.99th=[43254] 00:11:54.277 bw ( KiB/s): min= 768, max= 3224, per=55.35%, avg=2313.17, stdev=870.19, samples=6 00:11:54.277 iops : min= 192, max= 806, avg=578.17, stdev=217.61, samples=6 00:11:54.277 lat (usec) : 1000=0.06% 00:11:54.277 lat (msec) : 2=99.04%, 10=0.11%, 20=0.06%, 50=0.68% 00:11:54.277 cpu : usr=0.80%, sys=1.48%, ctx=1779, majf=0, minf=1 00:11:54.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.277 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1178016: Thu Jul 25 10:00:33 2024 00:11:54.277 read: IOPS=484, BW=1938KiB/s (1985kB/s)(5348KiB/2759msec) 00:11:54.277 slat (usec): min=8, max=20475, avg=45.13, stdev=588.99 00:11:54.277 clat (usec): min=734, max=43051, avg=1995.47, stdev=5462.27 00:11:54.277 lat (usec): min=759, max=43075, avg=2040.61, stdev=5490.91 00:11:54.277 clat percentiles (usec): 00:11:54.277 | 1.00th=[ 1057], 5.00th=[ 1156], 10.00th=[ 1188], 20.00th=[ 1221], 00:11:54.277 | 30.00th=[ 1237], 40.00th=[ 1254], 50.00th=[ 1270], 60.00th=[ 1270], 00:11:54.277 | 70.00th=[ 1287], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1352], 00:11:54.277 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:11:54.277 | 99.99th=[43254] 00:11:54.277 bw ( KiB/s): min= 96, max= 3128, per=44.94%, avg=1878.40, stdev=1411.15, samples=5 00:11:54.277 iops : min= 24, max= 782, avg=469.60, stdev=352.79, samples=5 00:11:54.277 lat (usec) : 750=0.07%, 1000=0.30% 00:11:54.277 lat (msec) : 2=97.76%, 50=1.79% 
00:11:54.277 cpu : usr=0.47%, sys=1.45%, ctx=1341, majf=0, minf=1 00:11:54.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.277 issued rwts: total=1338,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.277 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1178017: Thu Jul 25 10:00:33 2024 00:11:54.277 read: IOPS=24, BW=98.0KiB/s (100kB/s)(256KiB/2612msec) 00:11:54.277 slat (nsec): min=10165, max=69431, avg=18796.28, stdev=9675.96 00:11:54.277 clat (usec): min=876, max=41976, avg=40464.53, stdev=5035.65 00:11:54.277 lat (usec): min=945, max=42001, avg=40483.22, stdev=5029.33 00:11:54.277 clat percentiles (usec): 00:11:54.277 | 1.00th=[ 873], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:54.277 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:54.277 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:11:54.277 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:54.277 | 99.99th=[42206] 00:11:54.277 bw ( KiB/s): min= 96, max= 104, per=2.32%, avg=97.60, stdev= 3.58, samples=5 00:11:54.278 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:11:54.278 lat (usec) : 1000=1.54% 00:11:54.278 lat (msec) : 50=96.92% 00:11:54.278 cpu : usr=0.11%, sys=0.00%, ctx=66, majf=0, minf=2 00:11:54.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:54.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.278 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:54.278 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:54.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:54.278 00:11:54.278 Run status group 0 (all jobs): 00:11:54.278 READ: bw=4179KiB/s (4279kB/s), 97.8KiB/s-2284KiB/s (100kB/s-2339kB/s), io=12.7MiB (13.3MB), run=2612-3109msec 00:11:54.278 00:11:54.278 Disk stats (read/write): 00:11:54.278 nvme0n1: ios=69/0, merge=0/0, ticks=2789/0, in_queue=2789, util=94.39% 00:11:54.278 nvme0n2: ios=1775/0, merge=0/0, ticks=2939/0, in_queue=2939, util=94.89% 00:11:54.278 nvme0n3: ios=1240/0, merge=0/0, ticks=2501/0, in_queue=2501, util=96.03% 00:11:54.278 nvme0n4: ios=63/0, merge=0/0, ticks=2549/0, in_queue=2549, util=96.42% 00:11:54.538 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.538 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:54.799 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.799 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:54.799 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:54.799 10:00:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:55.060 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:55.060 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1177715 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:55.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:55.320 nvmf hotplug test: fio failed as expected 00:11:55.320 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:11:55.580 rmmod nvme_tcp 00:11:55.580 rmmod nvme_fabrics 00:11:55.580 rmmod nvme_keyring 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1174319 ']' 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1174319 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1174319 ']' 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1174319 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1174319 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1174319' 00:11:55.580 killing process with pid 1174319 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1174319 00:11:55.580 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1174319 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.840 10:00:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.754 00:11:57.754 real 0m27.782s 00:11:57.754 user 2m29.046s 00:11:57.754 sys 0m8.797s 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.754 ************************************ 00:11:57.754 END TEST nvmf_fio_target 00:11:57.754 ************************************ 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:57.754 10:00:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:58.015 ************************************ 00:11:58.015 START TEST nvmf_bdevio 00:11:58.015 ************************************ 00:11:58.015 10:00:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:58.015 * Looking for test storage... 00:11:58.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.015 10:00:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:04.644 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:04.644 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.644 
10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:04.644 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:04.644 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.644 10:00:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.644 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.905 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.905 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.905 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:04.905 10:00:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.905 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.905 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.166 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.681 ms 00:12:05.166 00:12:05.166 --- 10.0.0.2 ping statistics --- 00:12:05.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.167 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:12:05.167 00:12:05.167 --- 10.0.0.1 ping statistics --- 00:12:05.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.167 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1183038 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1183038 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1183038 ']' 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:05.167 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:05.167 [2024-07-25 10:00:44.138427] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
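The nvmf/common.sh trace above (script lines 244-268) is the harness isolating the target-side port in its own network namespace before launching nvmf_tgt inside it: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened, and reachability is ping-checked in both directions. A minimal stand-alone sketch of the same plumbing, assuming the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing used in this run:

  # clear any stale addressing on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # target port goes into its own namespace; initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic reach the initiator-side port
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check reachability in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the target application then runs inside the namespace (nvmfappstart prepends the netns prefix)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78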
00:12:05.167 [2024-07-25 10:00:44.138477] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.167 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.167 [2024-07-25 10:00:44.217794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.428 [2024-07-25 10:00:44.316100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.428 [2024-07-25 10:00:44.316161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.428 [2024-07-25 10:00:44.316169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.428 [2024-07-25 10:00:44.316176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.428 [2024-07-25 10:00:44.316183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.428 [2024-07-25 10:00:44.316351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:12:05.428 [2024-07-25 10:00:44.316616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:12:05.428 [2024-07-25 10:00:44.316775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:12:05.428 [2024-07-25 10:00:44.316778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.001 10:00:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.001 [2024-07-25 10:00:44.992778] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.001 Malloc0 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.001 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:06.002 [2024-07-25 10:00:45.058490] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:06.002 { 00:12:06.002 "params": { 00:12:06.002 "name": "Nvme$subsystem", 00:12:06.002 "trtype": "$TEST_TRANSPORT", 00:12:06.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.002 "adrfam": "ipv4", 00:12:06.002 "trsvcid": "$NVMF_PORT", 00:12:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.002 "hdgst": ${hdgst:-false}, 00:12:06.002 "ddgst": ${ddgst:-false} 00:12:06.002 }, 00:12:06.002 "method": "bdev_nvme_attach_controller" 00:12:06.002 } 00:12:06.002 EOF 00:12:06.002 )") 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
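Before gen_nvmf_target_json runs, bdevio.sh has already provisioned the target over its RPC socket (/var/tmp/spdk.sock, per the waitforlisten entry above): a TCP transport, a 64 MiB Malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. rpc_cmd wraps SPDK's JSON-RPC client, so a roughly equivalent stand-alone sequence with scripts/rpc.py (assuming the default socket path) would be:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB backing bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The resolved initiator config that bdevio consumes via --json /dev/fd/62 is printed in the next entries; it boils down to a single bdev_nvme_attach_controller call against that subsystem.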
00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:12:06.002 10:00:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:06.002 "params": { 00:12:06.002 "name": "Nvme1", 00:12:06.002 "trtype": "tcp", 00:12:06.002 "traddr": "10.0.0.2", 00:12:06.002 "adrfam": "ipv4", 00:12:06.002 "trsvcid": "4420", 00:12:06.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.002 "hdgst": false, 00:12:06.002 "ddgst": false 00:12:06.002 }, 00:12:06.002 "method": "bdev_nvme_attach_controller" 00:12:06.002 }' 00:12:06.002 [2024-07-25 10:00:45.090745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:06.002 [2024-07-25 10:00:45.090794] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1183244 ] 00:12:06.002 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.262 [2024-07-25 10:00:45.145515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.262 [2024-07-25 10:00:45.215049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.262 [2024-07-25 10:00:45.215166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.262 [2024-07-25 10:00:45.215168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.262 I/O targets: 00:12:06.262 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:06.262 00:12:06.262 00:12:06.263 CUnit - A unit testing framework for C - Version 2.1-3 00:12:06.263 http://cunit.sourceforge.net/ 00:12:06.263 00:12:06.263 00:12:06.263 Suite: bdevio tests on: Nvme1n1 00:12:06.523 Test: blockdev write read block ...passed 00:12:06.523 Test: blockdev write zeroes read block ...passed 00:12:06.523 Test: blockdev write zeroes read no split ...passed 00:12:06.523 Test: blockdev write zeroes read split ...passed 00:12:06.523 Test: blockdev write zeroes read split partial ...passed 00:12:06.523 Test: blockdev reset ...[2024-07-25 10:00:45.509445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:12:06.523 [2024-07-25 10:00:45.509506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1112ce0 (9): Bad file descriptor 00:12:06.523 [2024-07-25 10:00:45.618548] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:12:06.523 passed 00:12:06.523 Test: blockdev write read 8 blocks ...passed 00:12:06.523 Test: blockdev write read size > 128k ...passed 00:12:06.523 Test: blockdev write read invalid size ...passed 00:12:06.784 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:06.784 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:06.784 Test: blockdev write read max offset ...passed 00:12:06.784 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:06.784 Test: blockdev writev readv 8 blocks ...passed 00:12:06.784 Test: blockdev writev readv 30 x 1block ...passed 00:12:06.784 Test: blockdev writev readv block ...passed 00:12:06.784 Test: blockdev writev readv size > 128k ...passed 00:12:06.784 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:06.784 Test: blockdev comparev and writev ...[2024-07-25 10:00:45.846822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.784 [2024-07-25 10:00:45.846847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:06.784 [2024-07-25 10:00:45.846858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.784 [2024-07-25 10:00:45.846863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:06.784 [2024-07-25 10:00:45.847407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.784 [2024-07-25 10:00:45.847415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:06.784 [2024-07-25 10:00:45.847425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.784 [2024-07-25 10:00:45.847434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:06.785 [2024-07-25 10:00:45.847938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.785 [2024-07-25 10:00:45.847945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:06.785 [2024-07-25 10:00:45.847954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.785 [2024-07-25 10:00:45.847960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:06.785 [2024-07-25 10:00:45.848506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.785 [2024-07-25 10:00:45.848513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:06.785 [2024-07-25 10:00:45.848522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:06.785 [2024-07-25 10:00:45.848527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:06.785 passed 00:12:07.046 Test: blockdev nvme passthru rw ...passed 00:12:07.046 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:00:45.932909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.046 [2024-07-25 10:00:45.932918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:07.046 [2024-07-25 10:00:45.933299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.046 [2024-07-25 10:00:45.933306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:07.046 [2024-07-25 10:00:45.933689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.046 [2024-07-25 10:00:45.933696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:07.046 [2024-07-25 10:00:45.934086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:07.046 [2024-07-25 10:00:45.934093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:07.046 passed 00:12:07.046 Test: blockdev nvme admin passthru ...passed 00:12:07.046 Test: blockdev copy ...passed 00:12:07.046 00:12:07.046 Run Summary: Type Total Ran Passed Failed Inactive 00:12:07.046 suites 1 1 n/a 0 0 00:12:07.046 tests 23 23 23 0 0 00:12:07.046 asserts 152 152 152 0 n/a 00:12:07.046 00:12:07.046 Elapsed time = 1.268 seconds 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:07.046 rmmod nvme_tcp 00:12:07.046 rmmod nvme_fabrics 00:12:07.046 rmmod nvme_keyring 00:12:07.046 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
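The rmmod lines above and the killprocess/remove_spdk_ns entries that follow are nvmftestfini unwinding the fixture in reverse order: unload the kernel NVMe fabrics modules, stop the nvmf_tgt started earlier (pid 1183038 in this run), drop the target namespace, and flush the initiator-side address. A simplified sketch, under the assumption that the killprocess and _remove_spdk_ns helpers reduce to the commands shown (in the scripts they retry and do extra bookkeeping):

  modprobe -v -r nvme-tcp          # retried in the {1..20} loop seen above until module refs drop
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                  # killprocess: signal the target...
  wait "$nvmfpid"                  # ...and wait for it to exit
  ip netns delete cvl_0_0_ns_spdk  # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1         # clear the initiator-side address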
00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1183038 ']' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1183038 ']' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1183038' 00:12:07.307 killing process with pid 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1183038 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.307 10:00:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.863 00:12:09.863 real 0m11.538s 00:12:09.863 user 0m12.252s 00:12:09.863 sys 0m5.756s 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:09.863 ************************************ 00:12:09.863 END TEST nvmf_bdevio 00:12:09.863 ************************************ 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:09.863 00:12:09.863 real 4m53.981s 00:12:09.863 user 11m27.612s 00:12:09.863 sys 1m41.817s 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:09.863 ************************************ 00:12:09.863 END TEST nvmf_target_core 00:12:09.863 ************************************ 00:12:09.863 10:00:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:09.863 10:00:48 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.863 10:00:48 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.863 10:00:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:09.863 ************************************ 00:12:09.863 START TEST nvmf_target_extra 00:12:09.863 ************************************ 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:09.863 * Looking for test storage... 00:12:09.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.863 10:00:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
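Sourcing nvmf/common.sh for the extra test suite derives a fresh host identity with nvme gen-hostnqn and keeps the pieces in NVME_HOSTNQN/NVME_HOSTID/NVME_HOST for later kernel-initiator tests. A hedged illustration of how those variables are typically consumed; the connect/disconnect lines below are illustrative and not taken from this excerpt:

  NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-... as in the trace
  NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumption: host ID is the uuid suffix of the host NQN
  # a kernel-initiator test would connect to an SPDK subsystem roughly like this:
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:testnqn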
00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.864 ************************************ 00:12:09.864 START TEST nvmf_example 00:12:09.864 ************************************ 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:09.864 * Looking for test storage... 00:12:09.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.864 10:00:48 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.864 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.865 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.865 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.865 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.865 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.865 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:18.043 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.043 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:18.044 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:18.044 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:18.044 10:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:18.044 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:18.044 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:18.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:18.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.585 ms 00:12:18.044 00:12:18.044 --- 10.0.0.2 ping statistics --- 00:12:18.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.044 rtt min/avg/max/mdev = 0.585/0.585/0.585/0.000 ms 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:18.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:18.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:12:18.044 00:12:18.044 --- 10.0.0.1 ping statistics --- 00:12:18.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:18.044 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1187788 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1187788 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1187788 ']' 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:18.044 10:00:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.044 10:00:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:18.044 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:18.045 10:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:18.045 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:18.045 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.279 Initializing NVMe Controllers 00:12:30.279 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:30.279 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:30.279 Initialization complete. Launching workers. 00:12:30.279 ======================================================== 00:12:30.279 Latency(us) 00:12:30.279 Device Information : IOPS MiB/s Average min max 00:12:30.279 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14997.69 58.58 4266.96 894.83 15419.44 00:12:30.279 ======================================================== 00:12:30.279 Total : 14997.69 58.58 4266.96 894.83 15419.44 00:12:30.279 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:30.279 rmmod nvme_tcp 00:12:30.279 rmmod nvme_fabrics 00:12:30.279 rmmod nvme_keyring 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1187788 ']' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1187788 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1187788 ']' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1187788 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:30.279 10:01:07 
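The spdk_nvme_perf invocation above drives the workload summarized in the latency table: queue depth 64, 4096-byte I/Os, a random mixed workload with a 30% read share, a 10-second run, and the TCP target described by the -r transport ID string. An annotated form of the same command, with the binary path as it appears in this log:

  # -q 64: outstanding I/Os per queue; -o 4096: 4 KiB I/O size; -w randrw: random mixed workload
  # -M 30: read percentage of the mix; -t 10: run time in seconds; -r: NVMe-oF transport ID of the target
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'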
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1187788 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1187788' 00:12:30.279 killing process with pid 1187788 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1187788 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1187788 00:12:30.279 nvmf threads initialize successfully 00:12:30.279 bdev subsystem init successfully 00:12:30.279 created a nvmf target service 00:12:30.279 create targets's poll groups done 00:12:30.279 all subsystems of target started 00:12:30.279 nvmf target is running 00:12:30.279 all subsystems of target stopped 00:12:30.279 destroy targets's poll groups done 00:12:30.279 destroyed the nvmf target service 00:12:30.279 bdev subsystem finish successfully 00:12:30.279 nvmf threads destroy successfully 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.279 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 00:12:30.538 real 0m20.860s 00:12:30.538 user 0m46.241s 00:12:30.538 sys 0m6.440s 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:30.538 ************************************ 00:12:30.538 END TEST nvmf_example 00:12:30.538 ************************************ 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:30.538 10:01:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:30.538 10:01:09 
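Once perf completes, the trap is cleared and nvmftestfini tears the fixture down: the nvme-tcp/nvme-fabrics/nvme-keyring host modules are unloaded, the example target (pid 1187788) is killed, the spdk namespace is removed and the leftover address on cvl_0_1 is flushed. A rough standalone sketch of that cleanup, using the pid, namespace and interface names from this log (not the test helpers' exact code path), might be:

  # unload host-side NVMe/TCP modules (verbose, as the test does)
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  # stop the example nvmf target started earlier in this log
  kill 1187788
  # drop the test network namespace and flush the leftover host-side address
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1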
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.800 ************************************ 00:12:30.800 START TEST nvmf_filesystem 00:12:30.800 ************************************ 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:30.800 * Looking for test storage... 00:12:30.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:30.800 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:12:30.800 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:12:30.800 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:12:30.800 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:30.801 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:30.801 #define SPDK_CONFIG_H 00:12:30.801 #define SPDK_CONFIG_APPS 1 00:12:30.801 #define SPDK_CONFIG_ARCH native 00:12:30.801 #undef SPDK_CONFIG_ASAN 00:12:30.801 #undef SPDK_CONFIG_AVAHI 00:12:30.801 #undef SPDK_CONFIG_CET 00:12:30.801 #define SPDK_CONFIG_COVERAGE 1 00:12:30.801 #define SPDK_CONFIG_CROSS_PREFIX 00:12:30.801 #undef SPDK_CONFIG_CRYPTO 00:12:30.801 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:30.801 #undef SPDK_CONFIG_CUSTOMOCF 00:12:30.801 #undef SPDK_CONFIG_DAOS 00:12:30.801 #define SPDK_CONFIG_DAOS_DIR 00:12:30.801 #define SPDK_CONFIG_DEBUG 1 00:12:30.801 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:30.801 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:30.801 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:30.801 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:30.801 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:30.801 #undef SPDK_CONFIG_DPDK_UADK 00:12:30.801 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:30.801 #define SPDK_CONFIG_EXAMPLES 1 00:12:30.801 #undef SPDK_CONFIG_FC 00:12:30.801 #define SPDK_CONFIG_FC_PATH 00:12:30.801 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:30.801 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:30.801 #undef SPDK_CONFIG_FUSE 00:12:30.801 #undef SPDK_CONFIG_FUZZER 00:12:30.801 #define SPDK_CONFIG_FUZZER_LIB 00:12:30.801 #undef SPDK_CONFIG_GOLANG 00:12:30.801 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:30.801 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:30.801 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:30.801 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:30.801 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:30.801 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:30.801 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:30.801 #define SPDK_CONFIG_IDXD 1 00:12:30.801 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:30.801 #undef SPDK_CONFIG_IPSEC_MB 00:12:30.801 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:30.801 #define SPDK_CONFIG_ISAL 1 00:12:30.801 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:30.801 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:30.801 #define SPDK_CONFIG_LIBDIR 00:12:30.801 #undef SPDK_CONFIG_LTO 00:12:30.801 #define SPDK_CONFIG_MAX_LCORES 128 00:12:30.801 #define SPDK_CONFIG_NVME_CUSE 1 00:12:30.801 #undef SPDK_CONFIG_OCF 00:12:30.801 #define SPDK_CONFIG_OCF_PATH 00:12:30.801 #define SPDK_CONFIG_OPENSSL_PATH 00:12:30.801 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:30.801 #define SPDK_CONFIG_PGO_DIR 00:12:30.801 #undef SPDK_CONFIG_PGO_USE 00:12:30.801 #define SPDK_CONFIG_PREFIX /usr/local 00:12:30.801 #undef SPDK_CONFIG_RAID5F 00:12:30.801 #undef SPDK_CONFIG_RBD 00:12:30.801 #define SPDK_CONFIG_RDMA 1 00:12:30.801 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:30.801 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:30.801 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:30.801 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:30.801 #define SPDK_CONFIG_SHARED 1 00:12:30.801 #undef SPDK_CONFIG_SMA 00:12:30.801 #define SPDK_CONFIG_TESTS 1 00:12:30.801 #undef SPDK_CONFIG_TSAN 00:12:30.801 #define SPDK_CONFIG_UBLK 1 00:12:30.801 #define SPDK_CONFIG_UBSAN 1 00:12:30.801 #undef SPDK_CONFIG_UNIT_TESTS 00:12:30.801 #undef SPDK_CONFIG_URING 00:12:30.801 #define SPDK_CONFIG_URING_PATH 00:12:30.801 #undef SPDK_CONFIG_URING_ZNS 00:12:30.801 #undef SPDK_CONFIG_USDT 00:12:30.801 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:30.801 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:30.801 #define SPDK_CONFIG_VFIO_USER 1 00:12:30.801 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:12:30.801 #define SPDK_CONFIG_VHOST 1 00:12:30.801 #define SPDK_CONFIG_VIRTIO 1 00:12:30.801 #undef SPDK_CONFIG_VTUNE 00:12:30.801 #define SPDK_CONFIG_VTUNE_DIR 00:12:30.801 #define SPDK_CONFIG_WERROR 1 00:12:30.801 #define SPDK_CONFIG_WPDK_DIR 00:12:30.801 #undef SPDK_CONFIG_XNVME 00:12:30.801 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:30.801 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:30.802 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:30.802 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:30.802 10:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:30.802 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:12:30.803 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j144 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 1190572 ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 1190572 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.epPPhQ 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.epPPhQ/tests/target /tmp/spdk.epPPhQ 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=954236928 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330192896 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=118609215488 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=129370976256 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10761760768 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64623304704 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685486080 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=25850851328 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=25874198528 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23347200 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=efivarfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=efivarfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=216064 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=507904 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=287744 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=64683810816 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=64685490176 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=1679360 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12937093120 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12937097216 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:12:30.804 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:12:30.805 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:12:30.805 * Looking for test storage... 
00:12:30.805 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:12:30.805 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=118609215488 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=12976353280 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.065 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:31.066 10:01:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.199 
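The next stretch of the trace is gather_supported_nvmf_pci_devs filling the e810/x722/mlx ID lists and then looking under /sys/bus/pci/devices/<bdf>/net/ for the kernel interfaces bound to each matching function. A reduced sketch of that scan, hard-coded to the single 8086:159b (ice/E810) pair this run ends up matching; the full ID table from nvmf/common.sh is left out:

    #!/usr/bin/env bash
    # Scan PCI functions for one vendor:device pair and collect their net interfaces.
    set -euo pipefail

    vendor=0x8086
    device=0x159b      # Intel E810 (ice); the other supported IDs are omitted in this sketch
    net_devs=()

    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
        for ifdir in "$pci"/net/*; do
            [[ -e $ifdir ]] || continue    # matching function with no netdev bound
            net_devs+=("${ifdir##*/}")
            echo "Found ${pci##*/} ($vendor - $device): ${ifdir##*/}"
        done
    done

    echo "net devices: ${net_devs[*]:-none}"
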
10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.199 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.200 10:01:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:39.200 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:39.200 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:39.200 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:39.200 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:12:39.200 00:12:39.200 --- 10.0.0.2 ping statistics --- 00:12:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.200 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:12:39.200 00:12:39.200 --- 10.0.0.1 ping statistics --- 00:12:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.200 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:39.200 ************************************ 00:12:39.200 START TEST nvmf_filesystem_no_in_capsule 00:12:39.200 ************************************ 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:39.200 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1194192 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1194192 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1194192 ']' 00:12:39.201 
10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:39.201 10:01:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.201 [2024-07-25 10:01:17.484569] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:39.201 [2024-07-25 10:01:17.484628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.201 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.201 [2024-07-25 10:01:17.556158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.201 [2024-07-25 10:01:17.633465] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.201 [2024-07-25 10:01:17.633509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.201 [2024-07-25 10:01:17.633517] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.201 [2024-07-25 10:01:17.633523] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.201 [2024-07-25 10:01:17.633529] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
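The notices above are nvmf_tgt coming up inside the namespace (its reactors start on the next lines); the rpc_cmd calls that follow then assemble the test subsystem: a TCP transport with in-capsule data disabled, a 512 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. A sketch of the same sequence driven through scripts/rpc.py directly; the SPDK path and the polling loop standing in for waitforlisten are assumptions, while the RPC names and arguments are the ones in the trace:

    #!/usr/bin/env bash
    # Launch nvmf_tgt in the test namespace, wait for its RPC socket, build the subsystem.
    set -euo pipefail

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to your checkout
    NS=cvl_0_0_ns_spdk
    SOCK=/var/tmp/spdk.sock
    rpc() { "$SPDK/scripts/rpc.py" -s "$SOCK" "$@"; }

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!

    # Rough stand-in for waitforlisten: poll the RPC socket until the target answers.
    until rpc spdk_get_version >/dev/null 2>&1; do
        kill -0 "$tgt_pid"       # give up if the target already died
        sleep 0.5
    done

    rpc nvmf_create_transport -t tcp -o -u 8192 -c 0         # -c 0: no in-capsule data
    rpc bdev_malloc_create 512 512 -b Malloc1                # 512 MiB, 512-byte blocks
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
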
00:12:39.201 [2024-07-25 10:01:17.633672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.201 [2024-07-25 10:01:17.633789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.201 [2024-07-25 10:01:17.633947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.201 [2024-07-25 10:01:17.633948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.201 [2024-07-25 10:01:18.314251] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.201 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 Malloc1 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.462 10:01:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 [2024-07-25 10:01:18.444306] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:39.462 { 00:12:39.462 "name": "Malloc1", 00:12:39.462 "aliases": [ 00:12:39.462 "64b6503b-6ce3-4ae8-8249-d5c3b374e8d0" 00:12:39.462 ], 00:12:39.462 "product_name": "Malloc disk", 00:12:39.462 "block_size": 512, 00:12:39.462 "num_blocks": 1048576, 00:12:39.462 "uuid": "64b6503b-6ce3-4ae8-8249-d5c3b374e8d0", 00:12:39.462 "assigned_rate_limits": { 00:12:39.462 "rw_ios_per_sec": 0, 00:12:39.462 "rw_mbytes_per_sec": 0, 00:12:39.462 "r_mbytes_per_sec": 0, 00:12:39.462 "w_mbytes_per_sec": 0 00:12:39.462 }, 00:12:39.462 "claimed": true, 00:12:39.462 "claim_type": "exclusive_write", 00:12:39.462 "zoned": false, 00:12:39.462 "supported_io_types": { 00:12:39.462 "read": 
true, 00:12:39.462 "write": true, 00:12:39.462 "unmap": true, 00:12:39.462 "flush": true, 00:12:39.462 "reset": true, 00:12:39.462 "nvme_admin": false, 00:12:39.462 "nvme_io": false, 00:12:39.462 "nvme_io_md": false, 00:12:39.462 "write_zeroes": true, 00:12:39.462 "zcopy": true, 00:12:39.462 "get_zone_info": false, 00:12:39.462 "zone_management": false, 00:12:39.462 "zone_append": false, 00:12:39.462 "compare": false, 00:12:39.462 "compare_and_write": false, 00:12:39.462 "abort": true, 00:12:39.462 "seek_hole": false, 00:12:39.462 "seek_data": false, 00:12:39.462 "copy": true, 00:12:39.462 "nvme_iov_md": false 00:12:39.462 }, 00:12:39.462 "memory_domains": [ 00:12:39.462 { 00:12:39.462 "dma_device_id": "system", 00:12:39.462 "dma_device_type": 1 00:12:39.462 }, 00:12:39.462 { 00:12:39.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.462 "dma_device_type": 2 00:12:39.462 } 00:12:39.462 ], 00:12:39.462 "driver_specific": {} 00:12:39.462 } 00:12:39.462 ]' 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:39.462 10:01:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.376 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.376 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.376 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.376 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.376 10:01:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:43.291 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:43.552 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:43.552 10:01:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.935 ************************************ 00:12:44.935 START TEST filesystem_ext4 00:12:44.935 ************************************ 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
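From here the trace runs one filesystem_* subtest end to end: connect the initiator over NVMe/TCP, wait for the namespace with serial SPDKISFASTANDAWESOME to appear, partition it, make the filesystem, do a small write/remove cycle on /mnt/device, and confirm the nvmf target process survived the I/O. A condensed stand-alone version of that cycle (root required; the NQN, serial number and target PID are the values visible in this run, everything else is illustrative):

    #!/usr/bin/env bash
    # One connect/mkfs/mount/unmount cycle against the exported namespace.
    set -euo pipefail

    fstype=${1:-ext4}        # ext4 here; the same cycle runs again for btrfs below
    tgt_pid=1194192          # nvmf_tgt PID from this run, used for the liveness check

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # Wait for the namespace to show up, then resolve its block device by serial number.
    until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 1; done
    dev=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/ {print $1; exit}')

    parted -s "/dev/$dev" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1

    force=-f; [[ $fstype == ext4 ]] && force=-F      # mkfs force flag differs per fs
    "mkfs.$fstype" $force "/dev/${dev}p1"

    mkdir -p /mnt/device
    mount "/dev/${dev}p1" /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device

    kill -0 "$tgt_pid" && echo "target still alive after $fstype I/O"
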
00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:44.935 10:01:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:44.935 mke2fs 1.46.5 (30-Dec-2021) 00:12:44.935 Discarding device blocks: 0/522240 done 00:12:44.935 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:44.935 Filesystem UUID: 60900a9d-c423-434f-aa7e-c90d7092122e 00:12:44.935 Superblock backups stored on blocks: 00:12:44.935 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:44.935 00:12:44.935 Allocating group tables: 0/64 done 00:12:44.935 Writing inode tables: 0/64 done 00:12:47.520 Creating journal (8192 blocks): done 00:12:47.520 Writing superblocks and filesystem accounting information: 0/64 done 00:12:47.520 00:12:47.520 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:47.520 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:47.781 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:48.042 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:48.042 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:48.042 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:48.042 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:48.042 10:01:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:48.042 
10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1194192 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:48.042 00:12:48.042 real 0m3.295s 00:12:48.042 user 0m0.028s 00:12:48.042 sys 0m0.072s 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:48.042 ************************************ 00:12:48.042 END TEST filesystem_ext4 00:12:48.042 ************************************ 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:48.042 ************************************ 00:12:48.042 START TEST filesystem_btrfs 00:12:48.042 ************************************ 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:48.042 10:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:48.042 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:48.613 btrfs-progs v6.6.2 00:12:48.613 See https://btrfs.readthedocs.io for more information. 00:12:48.613 00:12:48.613 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:48.613 NOTE: several default settings have changed in version 5.15, please make sure 00:12:48.613 this does not affect your deployments: 00:12:48.613 - DUP for metadata (-m dup) 00:12:48.613 - enabled no-holes (-O no-holes) 00:12:48.613 - enabled free-space-tree (-R free-space-tree) 00:12:48.613 00:12:48.613 Label: (null) 00:12:48.613 UUID: b0711f3d-6635-4fd6-9efc-e2ccb27e1e0a 00:12:48.613 Node size: 16384 00:12:48.613 Sector size: 4096 00:12:48.613 Filesystem size: 510.00MiB 00:12:48.613 Block group profiles: 00:12:48.613 Data: single 8.00MiB 00:12:48.613 Metadata: DUP 32.00MiB 00:12:48.613 System: DUP 8.00MiB 00:12:48.613 SSD detected: yes 00:12:48.613 Zoned device: no 00:12:48.613 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:48.613 Runtime features: free-space-tree 00:12:48.613 Checksum: crc32c 00:12:48.613 Number of devices: 1 00:12:48.613 Devices: 00:12:48.613 ID SIZE PATH 00:12:48.613 1 510.00MiB /dev/nvme0n1p1 00:12:48.613 00:12:48.613 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:48.613 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1194192 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:49.185 00:12:49.185 real 0m1.000s 00:12:49.185 user 0m0.032s 00:12:49.185 sys 0m0.132s 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:49.185 ************************************ 00:12:49.185 END TEST filesystem_btrfs 00:12:49.185 ************************************ 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:49.185 ************************************ 00:12:49.185 START TEST filesystem_xfs 00:12:49.185 ************************************ 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:49.185 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:49.185 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:49.185 = sectsz=512 attr=2, projid32bit=1 00:12:49.185 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:49.185 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:12:49.185 data = bsize=4096 blocks=130560, imaxpct=25 00:12:49.185 = sunit=0 swidth=0 blks 00:12:49.185 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:49.185 log =internal log bsize=4096 blocks=16384, version=2 00:12:49.185 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:49.185 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:50.572 Discarding blocks...Done. 00:12:50.572 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:50.572 10:01:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1194192 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:52.485 00:12:52.485 real 0m3.121s 00:12:52.485 user 0m0.029s 00:12:52.485 sys 0m0.075s 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:52.485 ************************************ 00:12:52.485 END TEST filesystem_xfs 00:12:52.485 ************************************ 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:52.485 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:52.746 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:12:52.746 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.746 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.746 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.746 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1194192 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1194192 ']' 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1194192 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1194192 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1194192' 00:12:53.006 killing process with pid 1194192 00:12:53.006 10:01:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1194192 00:12:53.006 10:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1194192 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:53.266 00:12:53.266 real 0m14.779s 00:12:53.266 user 0m58.268s 00:12:53.266 sys 0m1.263s 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.266 ************************************ 00:12:53.266 END TEST nvmf_filesystem_no_in_capsule 00:12:53.266 ************************************ 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:53.266 ************************************ 00:12:53.266 START TEST nvmf_filesystem_in_capsule 00:12:53.266 ************************************ 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1197427 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1197427 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1197427 ']' 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
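At this point the harness restarts the target for the in-capsule variant: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with tracepoints enabled (-e 0xFFFF) and a four-core mask (-m 0xF, matching the four reactors started below), then blocks until the RPC socket at /var/tmp/spdk.sock is usable. A minimal stand-alone approximation of that launch-and-wait step is sketched here; the socket-polling loop is an illustration, not SPDK's actual waitforlisten helper.

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # illustrative wait: bail out if the target dies, otherwise poll for the RPC socket
  while [ ! -S /var/tmp/spdk.sock ]; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done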
00:12:53.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:53.266 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:53.266 [2024-07-25 10:01:32.334193] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:53.266 [2024-07-25 10:01:32.334242] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.266 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.266 [2024-07-25 10:01:32.399486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:53.527 [2024-07-25 10:01:32.464561] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.527 [2024-07-25 10:01:32.464597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.527 [2024-07-25 10:01:32.464604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:53.527 [2024-07-25 10:01:32.464611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:53.527 [2024-07-25 10:01:32.464616] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:53.527 [2024-07-25 10:01:32.464751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:53.527 [2024-07-25 10:01:32.464884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.527 [2024-07-25 10:01:32.465041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.527 [2024-07-25 10:01:32.465042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
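Because the target was started with -e 0xFFFF, every tracepoint group is enabled, and the notices above spell out how to inspect them while the test runs. In shell terms (both commands are taken directly from the notices; the copy destination is arbitrary):

  # live snapshot of the nvmf tracepoints for SPDK app instance 0
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory trace file for offline analysis
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0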
00:12:54.099 [2024-07-25 10:01:33.143181] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.099 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.359 Malloc1 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.359 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 [2024-07-25 10:01:33.278057] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:12:54.360 10:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:12:54.360 { 00:12:54.360 "name": "Malloc1", 00:12:54.360 "aliases": [ 00:12:54.360 "524605aa-dee1-43be-a3df-32f704c4f97f" 00:12:54.360 ], 00:12:54.360 "product_name": "Malloc disk", 00:12:54.360 "block_size": 512, 00:12:54.360 "num_blocks": 1048576, 00:12:54.360 "uuid": "524605aa-dee1-43be-a3df-32f704c4f97f", 00:12:54.360 "assigned_rate_limits": { 00:12:54.360 "rw_ios_per_sec": 0, 00:12:54.360 "rw_mbytes_per_sec": 0, 00:12:54.360 "r_mbytes_per_sec": 0, 00:12:54.360 "w_mbytes_per_sec": 0 00:12:54.360 }, 00:12:54.360 "claimed": true, 00:12:54.360 "claim_type": "exclusive_write", 00:12:54.360 "zoned": false, 00:12:54.360 "supported_io_types": { 00:12:54.360 "read": true, 00:12:54.360 "write": true, 00:12:54.360 "unmap": true, 00:12:54.360 "flush": true, 00:12:54.360 "reset": true, 00:12:54.360 "nvme_admin": false, 00:12:54.360 "nvme_io": false, 00:12:54.360 "nvme_io_md": false, 00:12:54.360 "write_zeroes": true, 00:12:54.360 "zcopy": true, 00:12:54.360 "get_zone_info": false, 00:12:54.360 "zone_management": false, 00:12:54.360 "zone_append": false, 00:12:54.360 "compare": false, 00:12:54.360 "compare_and_write": false, 00:12:54.360 "abort": true, 00:12:54.360 "seek_hole": false, 00:12:54.360 "seek_data": false, 00:12:54.360 "copy": true, 00:12:54.360 "nvme_iov_md": false 00:12:54.360 }, 00:12:54.360 "memory_domains": [ 00:12:54.360 { 00:12:54.360 "dma_device_id": "system", 00:12:54.360 "dma_device_type": 1 00:12:54.360 }, 00:12:54.360 { 00:12:54.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.360 "dma_device_type": 2 00:12:54.360 } 00:12:54.360 ], 00:12:54.360 "driver_specific": {} 00:12:54.360 } 00:12:54.360 ]' 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:12:54.360 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:54.360 10:01:33 
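The rpc_cmd calls traced above (nvmf_create_transport through nvmf_subsystem_add_listener, then bdev_get_bdevs) build the whole target-side configuration and measure the backing bdev. rpc_cmd forwards its arguments to the SPDK RPC interface, so the same setup can be reproduced against the traced socket with scripts/rpc.py; the arguments below are copied from the trace, only the rpc.py invocation style is assumed.

  RPC='scripts/rpc.py -s /var/tmp/spdk.sock'
  # transport with 4096-byte in-capsule data, which is the point of the in_capsule variant
  $RPC nvmf_create_transport -t tcp -o -u 8192 -c 4096
  $RPC bdev_malloc_create 512 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # get_bdev_size: block_size * num_blocks from bdev_get_bdevs, reported in MiB
  bs=$($RPC bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
  nb=$($RPC bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
  echo $(( bs * nb / 1024 / 1024 ))                              # 512 MiB -> malloc_size=536870912 bytes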
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.769 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.770 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:12:55.770 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.770 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:55.770 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:58.317 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:58.317 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:58.889 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.832 ************************************ 00:12:59.832 START TEST filesystem_in_capsule_ext4 00:12:59.832 ************************************ 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:59.832 10:01:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:59.832 mke2fs 1.46.5 (30-Dec-2021) 00:12:59.832 Discarding device blocks: 0/522240 done 00:12:59.832 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:59.832 Filesystem UUID: 59ca13f4-e99f-4607-90b1-6504934393b7 00:12:59.832 Superblock backups stored on blocks: 00:12:59.832 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:12:59.832 00:12:59.832 Allocating group tables: 0/64 done 00:12:59.832 Writing inode tables: 0/64 done 00:13:01.746 Creating journal (8192 blocks): done 00:13:02.839 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:13:02.839 00:13:02.839 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:02.839 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:02.839 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1197427 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:03.100 00:13:03.100 real 0m3.310s 00:13:03.100 user 0m0.025s 00:13:03.100 sys 0m0.075s 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:03.100 ************************************ 00:13:03.100 END TEST filesystem_in_capsule_ext4 00:13:03.100 ************************************ 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.100 10:01:42 
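Before the in-capsule ext4 run above, the host side was attached and partitioned as traced a few entries earlier; the in-capsule btrfs and xfs runs that follow reuse the same device. Condensed, with the commands copied from the trace and waitforserial compressed into a simple poll for illustration:

  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
               --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # wait until a block device carrying the target's serial shows up
  until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do sleep 2; done
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  sleep 1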
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.100 ************************************ 00:13:03.100 START TEST filesystem_in_capsule_btrfs 00:13:03.100 ************************************ 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:03.100 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:03.361 btrfs-progs v6.6.2 00:13:03.361 See https://btrfs.readthedocs.io for more information. 00:13:03.361 00:13:03.361 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:03.361 NOTE: several default settings have changed in version 5.15, please make sure 00:13:03.361 this does not affect your deployments: 00:13:03.361 - DUP for metadata (-m dup) 00:13:03.361 - enabled no-holes (-O no-holes) 00:13:03.361 - enabled free-space-tree (-R free-space-tree) 00:13:03.361 00:13:03.361 Label: (null) 00:13:03.361 UUID: 5ab5cefb-037a-4508-a382-1036236f9fe5 00:13:03.361 Node size: 16384 00:13:03.361 Sector size: 4096 00:13:03.361 Filesystem size: 510.00MiB 00:13:03.361 Block group profiles: 00:13:03.361 Data: single 8.00MiB 00:13:03.361 Metadata: DUP 32.00MiB 00:13:03.361 System: DUP 8.00MiB 00:13:03.361 SSD detected: yes 00:13:03.361 Zoned device: no 00:13:03.361 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:13:03.361 Runtime features: free-space-tree 00:13:03.361 Checksum: crc32c 00:13:03.361 Number of devices: 1 00:13:03.361 Devices: 00:13:03.361 ID SIZE PATH 00:13:03.361 1 510.00MiB /dev/nvme0n1p1 00:13:03.361 00:13:03.361 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:03.361 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1197427 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:03.950 00:13:03.950 real 0m0.710s 00:13:03.950 user 0m0.032s 00:13:03.950 sys 0m0.130s 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:03.950 10:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:03.950 ************************************ 00:13:03.950 END TEST filesystem_in_capsule_btrfs 00:13:03.950 ************************************ 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:03.950 ************************************ 00:13:03.950 START TEST filesystem_in_capsule_xfs 00:13:03.950 ************************************ 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:03.950 10:01:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:03.950 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:03.950 = sectsz=512 attr=2, projid32bit=1 00:13:03.950 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:03.950 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:03.950 data = bsize=4096 blocks=130560, imaxpct=25 00:13:03.950 = sunit=0 swidth=0 blks 00:13:03.950 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:03.950 log =internal log bsize=4096 blocks=16384, version=2 00:13:03.950 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:03.950 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:13:04.915 Discarding blocks...Done. 00:13:04.915 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:04.916 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:06.826 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1197427 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:06.827 00:13:06.827 real 0m2.959s 00:13:06.827 user 0m0.029s 00:13:06.827 sys 0m0.074s 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:06.827 ************************************ 00:13:06.827 END TEST filesystem_in_capsule_xfs 00:13:06.827 ************************************ 00:13:06.827 10:01:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:07.398 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:07.398 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.398 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.398 10:01:46 
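The teardown traced here mirrors the no-in-capsule run: drop the test partition, detach the host, remove the subsystem over RPC, and stop the target. Condensed into plain commands (killprocess is summarized as kill/wait, which is what the trace shows it doing; the rpc_cmd wrapper is rendered as scripts/rpc.py):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"    # 1197427 in this run
  wait "$nvmfpid"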
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:07.398 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1197427 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1197427 ']' 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1197427 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1197427 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1197427' 00:13:07.399 killing process with pid 1197427 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1197427 00:13:07.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1197427 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:07.660 00:13:07.660 real 0m14.418s 00:13:07.660 user 0m56.854s 
00:13:07.660 sys 0m1.264s 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:07.660 ************************************ 00:13:07.660 END TEST nvmf_filesystem_in_capsule 00:13:07.660 ************************************ 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:07.660 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:07.660 rmmod nvme_tcp 00:13:07.660 rmmod nvme_fabrics 00:13:07.660 rmmod nvme_keyring 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.921 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:09.835 00:13:09.835 real 0m39.210s 00:13:09.835 user 1m57.353s 00:13:09.835 sys 0m8.223s 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:09.835 ************************************ 00:13:09.835 END TEST nvmf_filesystem 00:13:09.835 ************************************ 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:09.835 10:01:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.096 ************************************ 00:13:10.096 START TEST nvmf_target_discovery 00:13:10.096 ************************************ 00:13:10.096 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:10.096 * Looking for test storage... 00:13:10.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.096 10:01:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.096 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:13:10.097 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:13:18.239 10:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:13:18.239 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:18.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:18.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:18.240 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:18.240 10:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:18.240 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:18.240 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:18.240 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:18.240 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:18.240 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:18.240 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:18.240 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:18.241 10:01:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:18.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:18.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:13:18.241 00:13:18.241 --- 10.0.0.2 ping statistics --- 00:13:18.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.241 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:18.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:18.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:13:18.241 00:13:18.241 --- 10.0.0.1 ping statistics --- 00:13:18.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:18.241 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1204640 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1204640 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1204640 ']' 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:18.241 10:01:56 
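The nvmf_tcp_init trace above isolates the target-side port (cvl_0_0) in its own network namespace while the initiator-side port (cvl_0_1) stays in the root namespace, then verifies reachability in both directions before nvmfappstart launches the target. A condensed sketch of that sequence, built only from commands that appear in the trace; the interface names, the 10.0.0.0/24 addresses and the nvmf_tgt arguments are specific to this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # accept NVMe/TCP traffic on the initiator port
  ping -c 1 10.0.0.2                                                 # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator check
  # nvmfappstart then starts the target inside the namespace, as logged below:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
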
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:18.241 10:01:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 [2024-07-25 10:01:56.290788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:18.241 [2024-07-25 10:01:56.290843] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:18.241 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.241 [2024-07-25 10:01:56.359962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.241 [2024-07-25 10:01:56.428524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.241 [2024-07-25 10:01:56.428563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.241 [2024-07-25 10:01:56.428571] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.241 [2024-07-25 10:01:56.428577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.241 [2024-07-25 10:01:56.428582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:18.241 [2024-07-25 10:01:56.428703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.241 [2024-07-25 10:01:56.428835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.241 [2024-07-25 10:01:56.428993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.241 [2024-07-25 10:01:56.428994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 [2024-07-25 10:01:57.138265] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 Null1 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 [2024-07-25 10:01:57.198603] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 Null2 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.241 Null3 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:18.241 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 Null4 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.242 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.503 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.503 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:18.503 00:13:18.503 Discovery Log Number of Records 6, Generation counter 6 00:13:18.503 =====Discovery Log Entry 0====== 00:13:18.503 trtype: tcp 00:13:18.503 adrfam: ipv4 00:13:18.503 subtype: current discovery subsystem 00:13:18.503 treq: not required 00:13:18.503 portid: 0 00:13:18.503 trsvcid: 4420 00:13:18.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: explicit discovery connections, duplicate discovery information 00:13:18.504 sectype: none 00:13:18.504 =====Discovery Log Entry 1====== 00:13:18.504 trtype: tcp 00:13:18.504 adrfam: ipv4 00:13:18.504 subtype: nvme subsystem 00:13:18.504 treq: not required 00:13:18.504 portid: 0 00:13:18.504 trsvcid: 4420 00:13:18.504 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: none 00:13:18.504 sectype: none 00:13:18.504 =====Discovery Log Entry 2====== 00:13:18.504 trtype: tcp 00:13:18.504 adrfam: ipv4 00:13:18.504 subtype: nvme subsystem 00:13:18.504 treq: not required 00:13:18.504 portid: 0 00:13:18.504 trsvcid: 4420 00:13:18.504 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: none 00:13:18.504 sectype: none 00:13:18.504 =====Discovery Log Entry 3====== 00:13:18.504 trtype: tcp 00:13:18.504 adrfam: ipv4 00:13:18.504 subtype: nvme subsystem 00:13:18.504 treq: not required 00:13:18.504 portid: 0 00:13:18.504 trsvcid: 4420 00:13:18.504 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: none 00:13:18.504 sectype: none 00:13:18.504 =====Discovery Log Entry 4====== 00:13:18.504 trtype: tcp 00:13:18.504 adrfam: ipv4 00:13:18.504 subtype: nvme subsystem 00:13:18.504 treq: not required 00:13:18.504 portid: 0 00:13:18.504 trsvcid: 4420 00:13:18.504 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: none 00:13:18.504 sectype: none 00:13:18.504 =====Discovery Log Entry 5====== 00:13:18.504 trtype: tcp 00:13:18.504 adrfam: ipv4 00:13:18.504 subtype: discovery subsystem referral 00:13:18.504 treq: not required 00:13:18.504 portid: 0 00:13:18.504 trsvcid: 4430 00:13:18.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:18.504 traddr: 10.0.0.2 00:13:18.504 eflags: none 00:13:18.504 sectype: none 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:18.504 Perform nvmf subsystem discovery via RPC 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.504 [ 00:13:18.504 { 00:13:18.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:18.504 "subtype": "Discovery", 00:13:18.504 "listen_addresses": [ 00:13:18.504 { 00:13:18.504 "trtype": "TCP", 00:13:18.504 "adrfam": "IPv4", 00:13:18.504 "traddr": "10.0.0.2", 00:13:18.504 "trsvcid": "4420" 00:13:18.504 } 00:13:18.504 ], 00:13:18.504 "allow_any_host": true, 00:13:18.504 "hosts": [] 00:13:18.504 }, 00:13:18.504 { 00:13:18.504 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:18.504 "subtype": "NVMe", 00:13:18.504 "listen_addresses": [ 00:13:18.504 { 00:13:18.504 "trtype": "TCP", 00:13:18.504 "adrfam": "IPv4", 00:13:18.504 
"traddr": "10.0.0.2", 00:13:18.504 "trsvcid": "4420" 00:13:18.504 } 00:13:18.504 ], 00:13:18.504 "allow_any_host": true, 00:13:18.504 "hosts": [], 00:13:18.504 "serial_number": "SPDK00000000000001", 00:13:18.504 "model_number": "SPDK bdev Controller", 00:13:18.504 "max_namespaces": 32, 00:13:18.504 "min_cntlid": 1, 00:13:18.504 "max_cntlid": 65519, 00:13:18.504 "namespaces": [ 00:13:18.504 { 00:13:18.504 "nsid": 1, 00:13:18.504 "bdev_name": "Null1", 00:13:18.504 "name": "Null1", 00:13:18.504 "nguid": "422D372789BF489E845C52610EEACF77", 00:13:18.504 "uuid": "422d3727-89bf-489e-845c-52610eeacf77" 00:13:18.504 } 00:13:18.504 ] 00:13:18.504 }, 00:13:18.504 { 00:13:18.504 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:18.504 "subtype": "NVMe", 00:13:18.504 "listen_addresses": [ 00:13:18.504 { 00:13:18.504 "trtype": "TCP", 00:13:18.504 "adrfam": "IPv4", 00:13:18.504 "traddr": "10.0.0.2", 00:13:18.504 "trsvcid": "4420" 00:13:18.504 } 00:13:18.504 ], 00:13:18.504 "allow_any_host": true, 00:13:18.504 "hosts": [], 00:13:18.504 "serial_number": "SPDK00000000000002", 00:13:18.504 "model_number": "SPDK bdev Controller", 00:13:18.504 "max_namespaces": 32, 00:13:18.504 "min_cntlid": 1, 00:13:18.504 "max_cntlid": 65519, 00:13:18.504 "namespaces": [ 00:13:18.504 { 00:13:18.504 "nsid": 1, 00:13:18.504 "bdev_name": "Null2", 00:13:18.504 "name": "Null2", 00:13:18.504 "nguid": "8C4DD994E9D94CDC8ABA6B7F7D824AFD", 00:13:18.504 "uuid": "8c4dd994-e9d9-4cdc-8aba-6b7f7d824afd" 00:13:18.504 } 00:13:18.504 ] 00:13:18.504 }, 00:13:18.504 { 00:13:18.504 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:18.504 "subtype": "NVMe", 00:13:18.504 "listen_addresses": [ 00:13:18.504 { 00:13:18.504 "trtype": "TCP", 00:13:18.504 "adrfam": "IPv4", 00:13:18.504 "traddr": "10.0.0.2", 00:13:18.504 "trsvcid": "4420" 00:13:18.504 } 00:13:18.504 ], 00:13:18.504 "allow_any_host": true, 00:13:18.504 "hosts": [], 00:13:18.504 "serial_number": "SPDK00000000000003", 00:13:18.504 "model_number": "SPDK bdev Controller", 00:13:18.504 "max_namespaces": 32, 00:13:18.504 "min_cntlid": 1, 00:13:18.504 "max_cntlid": 65519, 00:13:18.504 "namespaces": [ 00:13:18.504 { 00:13:18.504 "nsid": 1, 00:13:18.504 "bdev_name": "Null3", 00:13:18.504 "name": "Null3", 00:13:18.504 "nguid": "024B77E57B2E4D0088AD37E7E42AAC41", 00:13:18.504 "uuid": "024b77e5-7b2e-4d00-88ad-37e7e42aac41" 00:13:18.504 } 00:13:18.504 ] 00:13:18.504 }, 00:13:18.504 { 00:13:18.504 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:18.504 "subtype": "NVMe", 00:13:18.504 "listen_addresses": [ 00:13:18.504 { 00:13:18.504 "trtype": "TCP", 00:13:18.504 "adrfam": "IPv4", 00:13:18.504 "traddr": "10.0.0.2", 00:13:18.504 "trsvcid": "4420" 00:13:18.504 } 00:13:18.504 ], 00:13:18.504 "allow_any_host": true, 00:13:18.504 "hosts": [], 00:13:18.504 "serial_number": "SPDK00000000000004", 00:13:18.504 "model_number": "SPDK bdev Controller", 00:13:18.504 "max_namespaces": 32, 00:13:18.504 "min_cntlid": 1, 00:13:18.504 "max_cntlid": 65519, 00:13:18.504 "namespaces": [ 00:13:18.504 { 00:13:18.504 "nsid": 1, 00:13:18.504 "bdev_name": "Null4", 00:13:18.504 "name": "Null4", 00:13:18.504 "nguid": "25737E2718F84C69ADF4FB5794CD653C", 00:13:18.504 "uuid": "25737e27-18f8-4c69-adf4-fb5794cd653c" 00:13:18.504 } 00:13:18.504 ] 00:13:18.504 } 00:13:18.504 ] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:18.504 10:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:18.504 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:18.505 10:01:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.505 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:18.766 rmmod nvme_tcp 00:13:18.766 rmmod nvme_fabrics 00:13:18.766 rmmod nvme_keyring 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.766 10:01:57 
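Teardown is the mirror image: each subsystem and its null bdev are deleted over RPC, the referral is removed, and bdev_get_bdevs confirms that nothing is left behind before nvmftestfini unloads nvme-tcp/nvme-fabrics, kills the target process and removes the namespace, as the trace continues below. A rough equivalent of those RPC calls:

  for i in 1 2 3 4; do
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      scripts/rpc.py bdev_null_delete Null$i
  done
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # expected to print nothing at this point
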
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1204640 ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1204640 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1204640 ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1204640 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1204640 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1204640' 00:13:18.766 killing process with pid 1204640 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1204640 00:13:18.766 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1204640 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.026 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.942 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:20.942 00:13:20.942 real 0m11.032s 00:13:20.942 user 0m8.190s 00:13:20.942 sys 0m5.677s 00:13:20.942 10:01:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.942 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:20.942 ************************************ 00:13:20.942 END TEST nvmf_target_discovery 00:13:20.942 ************************************ 00:13:20.942 10:02:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:20.942 10:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:20.942 10:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.942 10:02:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.942 ************************************ 00:13:20.942 START TEST nvmf_referrals 00:13:20.942 ************************************ 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:21.204 * Looking for test storage... 00:13:21.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:21.204 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:21.204 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:13:21.204 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:29.347 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.347 10:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:29.347 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:29.347 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 
00:13:29.347 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:29.347 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:29.348 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:29.348 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:29.348 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:13:29.348 00:13:29.348 --- 10.0.0.2 ping statistics --- 00:13:29.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.348 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.348 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.348 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:13:29.348 00:13:29.348 --- 10.0.0.1 ping statistics --- 00:13:29.348 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.348 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1209000 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1209000 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1209000 ']' 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
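The nvmf_tcp_init sequence traced above reduces to a small amount of iproute2 plumbing: one of the two detected ice ports is moved into a private network namespace for the SPDK target, the other stays in the root namespace for the initiator, and a firewall rule plus two pings confirm the path before nvmf_tgt is started inside the namespace. A condensed sketch of those commands follows; the interface names (cvl_0_0, cvl_0_1) and the 10.0.0.0/24 addresses are simply what this CI host reported, not fixed requirements.

ip netns add cvl_0_0_ns_spdk                                   # namespace that hosts the SPDK target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the host firewall for NVMe/TCP (port 4420)
ping -c 1 10.0.0.2                                             # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> root namespace
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # started (and backgrounded) by nvmfappstart, as traced above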
00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:29.348 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 [2024-07-25 10:02:07.355532] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:29.348 [2024-07-25 10:02:07.355592] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.348 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.348 [2024-07-25 10:02:07.425989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.348 [2024-07-25 10:02:07.500587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.348 [2024-07-25 10:02:07.500627] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.348 [2024-07-25 10:02:07.500636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.348 [2024-07-25 10:02:07.500642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.348 [2024-07-25 10:02:07.500648] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:29.348 [2024-07-25 10:02:07.500791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.348 [2024-07-25 10:02:07.500914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.348 [2024-07-25 10:02:07.501071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.348 [2024-07-25 10:02:07.501073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 [2024-07-25 10:02:08.186156] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 [2024-07-25 10:02:08.202326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:29.348 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:29.349 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
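At this point the test has walked the whole referral lifecycle against the target's RPC socket: three plain referrals (127.0.0.2/.3/.4 on port 4430) were added, listed and removed, and two referrals to 127.0.0.2:4430 were then re-added with explicit subsystem NQNs (the discovery NQN and nqn.2016-06.io.spdk:cnode1). rpc_cmd is the autotest wrapper; assuming it forwards its arguments to SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock, roughly equivalent direct calls would be:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                               # TCP transport, as traced at referrals.sh@40
scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery      # discovery listener on the target namespace IP
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430               # plain referral
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery  # referral to another discovery service
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1   # referral to a specific subsystem
scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort      # list referral addresses, as get_referral_ips rpc does
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1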
00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.610 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.872 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.133 10:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:30.133 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.424 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:30.424 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.424 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.425 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:30.686 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:30.687 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
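Each get_referral_ips nvme / get_discovery_entries check above is the initiator-side counterpart of those RPCs: the kernel nvme CLI fetches the discovery log from 10.0.0.2:8009 as JSON and jq filters the records by subtype. A standalone sketch of those checks, reusing the exact jq filters from referrals.sh (the --hostnqn/--hostid arguments seen in the trace are this host's generated values and are left out here):

nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
# While both referrals exist this prints 127.0.0.2 twice; after the removals above it prints nothing.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq '.records[] | select(.subtype == "nvme subsystem")'                  # entry whose .subnqn is nqn.2016-06.io.spdk:cnode1
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq '.records[] | select(.subtype == "discovery subsystem referral")'    # entry whose .subnqn is the discovery NQN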
00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:30.948 rmmod nvme_tcp 00:13:30.948 rmmod nvme_fabrics 00:13:30.948 rmmod nvme_keyring 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1209000 ']' 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1209000 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1209000 ']' 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1209000 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1209000 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1209000' 00:13:30.948 killing process with pid 1209000 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 1209000 00:13:30.948 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1209000 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.948 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.209 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.123 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:33.124 00:13:33.124 real 0m12.081s 00:13:33.124 user 0m13.252s 00:13:33.124 sys 0m5.911s 00:13:33.124 10:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:33.124 ************************************ 00:13:33.124 END TEST nvmf_referrals 00:13:33.124 ************************************ 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.124 ************************************ 00:13:33.124 START TEST nvmf_connect_disconnect 00:13:33.124 ************************************ 00:13:33.124 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:33.385 * Looking for test storage... 00:13:33.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:33.385 10:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.385 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:13:33.386 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:13:40.095 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:40.096 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:40.096 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.096 10:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:40.096 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:40.096 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
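The gather_supported_nvmf_pci_devs trace above matches the two Intel E810 functions (0000:4b:00.0 and 0000:4b:00.1, device id 0x159b, driver ice) and resolves each one to its kernel interface through the /sys/bus/pci/devices/<pci>/net/ glob, which is how cvl_0_0 and cvl_0_1 are found. A minimal stand-alone sketch of that sysfs lookup follows; the lspci filter is an illustrative substitute for the script's own pci_bus_cache table, not something this test actually runs.

    # enumerate E810 functions (vendor 0x8086, device 0x159b) and print their netdevs
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: $(basename "$path")"
        done
    done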
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.096 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.357 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:40.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:13:40.357 00:13:40.357 --- 10.0.0.2 ping statistics --- 00:13:40.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.357 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.358 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.358 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:13:40.358 00:13:40.358 --- 10.0.0.1 ping statistics --- 00:13:40.358 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.358 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1213766 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1213766 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1213766 ']' 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:40.358 10:02:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:40.618 [2024-07-25 10:02:19.531780] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
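Condensed from the nvmf_tcp_init and nvmfappstart trace above: the target port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, reachability is checked in both directions with ping, and nvmf_tgt is launched inside the namespace. Every command below is taken from this trace; only the relative binary path and the trailing ampersand (the script instead uses waitforlisten on /var/tmp/spdk.sock) are adaptations.

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns initiator port
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &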
00:13:40.618 [2024-07-25 10:02:19.531828] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.618 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.618 [2024-07-25 10:02:19.598576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.618 [2024-07-25 10:02:19.664398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.619 [2024-07-25 10:02:19.664433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.619 [2024-07-25 10:02:19.664440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.619 [2024-07-25 10:02:19.664447] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.619 [2024-07-25 10:02:19.664452] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:40.619 [2024-07-25 10:02:19.664621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.619 [2024-07-25 10:02:19.664740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.619 [2024-07-25 10:02:19.664895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.619 [2024-07-25 10:02:19.664896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:41.190 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:41.190 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:13:41.190 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:41.190 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:41.190 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 [2024-07-25 10:02:20.355182] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.450 10:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:41.450 [2024-07-25 10:02:20.414436] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:41.450 10:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:45.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.955 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:59.753 10:02:38 
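The connect_disconnect test body traced above provisions the target over the RPC socket and then runs num_iterations=5 connect/disconnect cycles. The rpc_cmd lines are taken from the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py on /var/tmp/spdk.sock); the per-iteration host commands are hidden by set +x, so the nvme-cli invocations below are an assumed reconstruction that matches the five "disconnected 1 controller(s)" messages, not lines copied from this log.

    # target-side provisioning, as traced
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc_cmd bdev_malloc_create 64 512                    # returns the bdev name Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # host-side loop (assumed nvme-cli form; the trace only shows the disconnect output)
    for i in $(seq 1 5); do
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done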
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:59.753 rmmod nvme_tcp 00:13:59.753 rmmod nvme_fabrics 00:13:59.753 rmmod nvme_keyring 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1213766 ']' 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1213766 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1213766 ']' 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1213766 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:59.753 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1213766 00:14:00.015 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:00.015 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:00.015 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1213766' 00:14:00.015 killing process with pid 1213766 00:14:00.015 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1213766 00:14:00.015 10:02:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1213766 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.015 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:02.572 00:14:02.572 real 0m28.905s 00:14:02.572 user 1m19.380s 00:14:02.572 sys 0m6.395s 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:02.572 10:02:41 
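The nvmftestfini trace above tears the environment back down: the NVMe/TCP host modules are unloaded, the nvmf_tgt process (pid 1213766 in this run) is killed and reaped, and the namespace plumbing is undone. The modprobe, kill/wait, and address-flush commands are from the trace; the explicit namespace removal is an assumption, since _remove_spdk_ns runs with xtrace disabled here.

    modprobe -v -r nvme-tcp                # the rmmod lines above show nvme_tcp, nvme_fabrics, nvme_keyring unloading
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # nvmfpid=1213766 for this run
    ip netns delete cvl_0_0_ns_spdk        # assumed equivalent of _remove_spdk_ns (not expanded in the trace)
    ip -4 addr flush cvl_0_1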
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 ************************************ 00:14:02.572 END TEST nvmf_connect_disconnect 00:14:02.572 ************************************ 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 ************************************ 00:14:02.572 START TEST nvmf_multitarget 00:14:02.572 ************************************ 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:02.572 * Looking for test storage... 00:14:02.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:02.572 10:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:02.572 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:14:02.573 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:09.165 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.165 10:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:09.165 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:09.165 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:09.165 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:09.165 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:09.166 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:09.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:09.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms 00:14:09.428 00:14:09.428 --- 10.0.0.2 ping statistics --- 00:14:09.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.428 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:09.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:09.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:14:09.428 00:14:09.428 --- 10.0.0.1 ping statistics --- 00:14:09.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:09.428 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1221871 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1221871 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1221871 ']' 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:09.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:09.428 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:09.689 [2024-07-25 10:02:48.594610] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:09.689 [2024-07-25 10:02:48.594677] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:09.689 EAL: No free 2048 kB hugepages reported on node 1 00:14:09.689 [2024-07-25 10:02:48.666451] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:09.689 [2024-07-25 10:02:48.741748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:09.689 [2024-07-25 10:02:48.741786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:09.689 [2024-07-25 10:02:48.741793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.689 [2024-07-25 10:02:48.741800] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.689 [2024-07-25 10:02:48.741806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.689 [2024-07-25 10:02:48.741868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.689 [2024-07-25 10:02:48.742004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:09.689 [2024-07-25 10:02:48.742163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:09.689 [2024-07-25 10:02:48.742164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:10.262 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:10.262 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:10.262 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:10.263 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:10.263 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:10.523 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:10.523 "nvmf_tgt_1" 00:14:10.524 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:10.785 "nvmf_tgt_2" 00:14:10.785 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:10.785 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:10.785 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:10.785 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:10.785 true 00:14:10.785 10:02:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:11.047 true 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:11.047 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:11.047 rmmod nvme_tcp 00:14:11.047 rmmod nvme_fabrics 00:14:11.047 rmmod nvme_keyring 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1221871 ']' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1221871 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1221871 ']' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1221871 00:14:11.308 10:02:50 
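The multitarget test body traced above drives multitarget_rpc.py against the running nvmf_tgt: it checks that only the default target exists, creates nvmf_tgt_1 and nvmf_tgt_2 (with -s 32, as traced), confirms the target count rises to 3, deletes both, and confirms the count drops back to 1. The sequence is re-expressed as standalone shell below; the RPC names and arguments are taken from the trace, while the bracketed count checks restate the script's jq-length comparisons.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]   # only the default target to start
    "$rpc" nvmf_create_target -n nvmf_tgt_1 -s 32        # prints "nvmf_tgt_1"
    "$rpc" nvmf_create_target -n nvmf_tgt_2 -s 32        # prints "nvmf_tgt_2"
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 3 ]   # default target plus the two new ones
    "$rpc" nvmf_delete_target -n nvmf_tgt_1              # prints "true"
    "$rpc" nvmf_delete_target -n nvmf_tgt_2              # prints "true"
    [ "$("$rpc" nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default target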
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1221871 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1221871' 00:14:11.308 killing process with pid 1221871 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1221871 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1221871 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.308 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.857 00:14:13.857 real 0m11.236s 00:14:13.857 user 0m9.176s 00:14:13.857 sys 0m5.861s 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:13.857 ************************************ 00:14:13.857 END TEST nvmf_multitarget 00:14:13.857 ************************************ 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.857 ************************************ 00:14:13.857 START TEST nvmf_rpc 00:14:13.857 ************************************ 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:13.857 * Looking for test storage... 
00:14:13.857 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.857 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.858 10:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.858 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:20.450 10:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:20.450 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:20.450 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.450 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.451 
10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:20.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:20.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:20.451 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:20.451 10:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:20.713 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:20.713 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.774 ms 00:14:20.713 00:14:20.713 --- 10.0.0.2 ping statistics --- 00:14:20.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.713 rtt min/avg/max/mdev = 0.774/0.774/0.774/0.000 ms 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:20.713 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.713 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:14:20.713 00:14:20.713 --- 10.0.0.1 ping statistics --- 00:14:20.713 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.713 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:20.713 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1226260 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1226260 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1226260 ']' 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.975 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:20.975 [2024-07-25 10:02:59.919096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:20.975 [2024-07-25 10:02:59.919154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.975 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.975 [2024-07-25 10:02:59.991730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:20.975 [2024-07-25 10:03:00.070974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.975 [2024-07-25 10:03:00.071017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.975 [2024-07-25 10:03:00.071026] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.975 [2024-07-25 10:03:00.071032] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.975 [2024-07-25 10:03:00.071038] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:20.975 [2024-07-25 10:03:00.071189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.975 [2024-07-25 10:03:00.071309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:20.975 [2024-07-25 10:03:00.071646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:20.975 [2024-07-25 10:03:00.071647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.919 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:21.919 "tick_rate": 2400000000, 00:14:21.919 "poll_groups": [ 00:14:21.919 { 00:14:21.919 "name": "nvmf_tgt_poll_group_000", 00:14:21.919 "admin_qpairs": 0, 00:14:21.919 "io_qpairs": 0, 00:14:21.919 "current_admin_qpairs": 0, 00:14:21.919 "current_io_qpairs": 0, 00:14:21.919 "pending_bdev_io": 0, 00:14:21.919 "completed_nvme_io": 0, 00:14:21.919 "transports": [] 00:14:21.919 }, 00:14:21.919 { 00:14:21.919 "name": "nvmf_tgt_poll_group_001", 00:14:21.919 "admin_qpairs": 0, 00:14:21.919 "io_qpairs": 0, 00:14:21.919 "current_admin_qpairs": 0, 00:14:21.919 "current_io_qpairs": 0, 00:14:21.919 "pending_bdev_io": 0, 00:14:21.919 "completed_nvme_io": 0, 00:14:21.919 "transports": [] 00:14:21.920 }, 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_002", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [] 00:14:21.920 }, 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_003", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [] 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 }' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 [2024-07-25 10:03:00.870567] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:21.920 "tick_rate": 2400000000, 00:14:21.920 "poll_groups": [ 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_000", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [ 00:14:21.920 { 00:14:21.920 "trtype": "TCP" 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 }, 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_001", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [ 00:14:21.920 { 00:14:21.920 "trtype": "TCP" 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 }, 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_002", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [ 00:14:21.920 { 00:14:21.920 "trtype": "TCP" 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 }, 00:14:21.920 { 00:14:21.920 "name": "nvmf_tgt_poll_group_003", 00:14:21.920 "admin_qpairs": 0, 00:14:21.920 "io_qpairs": 0, 00:14:21.920 "current_admin_qpairs": 0, 00:14:21.920 "current_io_qpairs": 0, 00:14:21.920 "pending_bdev_io": 0, 00:14:21.920 "completed_nvme_io": 0, 00:14:21.920 "transports": [ 00:14:21.920 { 00:14:21.920 "trtype": "TCP" 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 } 00:14:21.920 ] 00:14:21.920 }' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:21.920 10:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 Malloc1 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:21.920 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.182 [2024-07-25 10:03:01.058419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:14:22.182 [2024-07-25 10:03:01.085330] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:22.182 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:22.182 could not add new controller: failed to write to nvme-fabrics device 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.182 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.655 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:23.655 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.655 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.655 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:23.655 10:03:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:25.569 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:25.830 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:25.831 [2024-07-25 10:03:04.813146] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:14:25.831 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:25.831 could not add new controller: failed to write to nvme-fabrics device 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.831 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:27.746 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:27.746 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:27.746 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:27.746 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:27.746 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
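The section of the trace since the subsystem was created exercises host access control on nqn.2016-06.io.spdk:cnode1: with allow_any_host disabled and the host NQN not whitelisted, nvme connect is rejected ("does not allow host ... Input/output error"); after nvmf_subsystem_add_host it succeeds; after nvmf_subsystem_remove_host it is rejected again; and nvmf_subsystem_allow_any_host -e reopens the subsystem. A condensed sketch of that sequence, with the rpc.py path assumed as before and $HOSTNQN/$HOSTID standing in for the uuid-based values common.sh generated with nvme gen-hostnqn (visible in the trace above):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed standard rpc.py client
    subnqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_subsystem_allow_any_host -d $subnqn        # close the subsystem to unlisted hosts
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $subnqn -a 10.0.0.2 -s 4420   # rejected: "does not allow host"
    $rpc nvmf_subsystem_add_host $subnqn $HOSTNQN        # whitelist this host NQN
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $subnqn -a 10.0.0.2 -s 4420   # now accepted
    nvme disconnect -n $subnqn
    $rpc nvmf_subsystem_remove_host $subnqn $HOSTNQN     # revoke it: the next connect fails again
    $rpc nvmf_subsystem_allow_any_host -e $subnqn        # or open the subsystem to any host
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $subnqn -a 10.0.0.2 -s 4420
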
00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.658 [2024-07-25 10:03:08.574593] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.658 
10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.658 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:31.042 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:31.042 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:31.042 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:31.042 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:31.042 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:14:33.584 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 [2024-07-25 10:03:12.251524] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.585 10:03:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.969 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:34.969 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:14:34.969 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:34.969 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:34.969 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:36.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.883 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.884 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.884 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.884 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.884 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.884 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:36.884 [2024-07-25 10:03:16.016012] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.144 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.145 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:38.532 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:38.532 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:38.532 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:38.532 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:38.532 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:41.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.077 10:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:41.077 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 [2024-07-25 10:03:19.780334] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.078 10:03:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.515 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.515 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.515 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.515 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:42.515 10:03:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:44.437 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:44.437 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 [2024-07-25 10:03:23.540916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.438 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.369 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.369 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:46.369 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.369 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:46.369 10:03:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.283 10:03:27 
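The waitforserial helper being traced at this point simply polls lsblk until a block device carrying the subsystem's serial shows up. A minimal sketch of that polling loop, using the retry bound, interval, and lsblk/grep invocation visible in the trace (the function body itself is illustrative, not copied from autotest_common.sh):

    waitforserial() {
        local serial=$1 want=${2:-1} i=0 found=0
        while (( i++ <= 15 )); do                        # same retry bound as the trace
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( found == want )) && return 0              # device(s) with matching serial appeared
            sleep 2                                      # same interval as the trace
        done
        return 1                                         # never appeared: let the test fail
    }

waitforserial_disconnect, also visible throughout this trace, is the inverse check: it loops until grep -q -w no longer finds the serial in the lsblk output.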
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 [2024-07-25 10:03:27.313810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 [2024-07-25 10:03:27.373933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.283 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.284 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 [2024-07-25 10:03:27.438116] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 [2024-07-25 10:03:27.498307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 [2024-07-25 10:03:27.558504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 
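This second block (target/rpc.sh@99-107 in the trace, driven by seq 1 5) repeats the subsystem lifecycle five times without a host connection: create, add the TCP listener, add Malloc1 as a namespace, allow any host, then remove the namespace and delete the subsystem again. The run is then summarized through nvmf_get_stats, with the jsum helper visible a little further down reducing the per-poll-group counters to single totals. A sketch of that reduction, using the jq filters and awk expression shown in the trace (the wrapper function and the $stats variable follow the trace's naming but are otherwise illustrative):

    stats=$(./scripts/rpc.py nvmf_get_stats)             # JSON with one entry per poll group
    jsum() {
        jq "$1" <<< "$stats" | awk '{s+=$1}END{print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # -> 7   (0 + 1 + 6 + 0 in this run)
    jsum '.poll_groups[].io_qpairs'      # -> 889 (224 + 223 + 218 + 224 in this run)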
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.545 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:48.545 "tick_rate": 2400000000, 00:14:48.545 "poll_groups": [ 00:14:48.545 { 00:14:48.545 "name": "nvmf_tgt_poll_group_000", 00:14:48.545 "admin_qpairs": 0, 00:14:48.545 "io_qpairs": 224, 00:14:48.546 "current_admin_qpairs": 0, 00:14:48.546 "current_io_qpairs": 0, 00:14:48.546 "pending_bdev_io": 0, 00:14:48.546 "completed_nvme_io": 380, 00:14:48.546 "transports": [ 00:14:48.546 { 00:14:48.546 "trtype": "TCP" 00:14:48.546 } 00:14:48.546 ] 00:14:48.546 }, 00:14:48.546 { 00:14:48.546 "name": "nvmf_tgt_poll_group_001", 00:14:48.546 "admin_qpairs": 1, 00:14:48.546 "io_qpairs": 223, 00:14:48.546 "current_admin_qpairs": 0, 00:14:48.546 "current_io_qpairs": 0, 00:14:48.546 "pending_bdev_io": 0, 00:14:48.546 "completed_nvme_io": 362, 00:14:48.546 "transports": [ 00:14:48.546 { 00:14:48.546 "trtype": "TCP" 00:14:48.546 } 00:14:48.546 ] 00:14:48.546 }, 00:14:48.546 { 00:14:48.546 "name": "nvmf_tgt_poll_group_002", 00:14:48.546 "admin_qpairs": 6, 00:14:48.546 "io_qpairs": 218, 00:14:48.546 "current_admin_qpairs": 0, 00:14:48.546 "current_io_qpairs": 0, 00:14:48.546 "pending_bdev_io": 0, 00:14:48.546 "completed_nvme_io": 224, 00:14:48.546 "transports": [ 00:14:48.546 { 00:14:48.546 "trtype": "TCP" 00:14:48.546 } 00:14:48.546 ] 00:14:48.546 }, 00:14:48.546 { 00:14:48.546 "name": "nvmf_tgt_poll_group_003", 00:14:48.546 "admin_qpairs": 0, 00:14:48.546 "io_qpairs": 224, 00:14:48.546 "current_admin_qpairs": 0, 00:14:48.546 "current_io_qpairs": 0, 00:14:48.546 "pending_bdev_io": 0, 00:14:48.546 "completed_nvme_io": 273, 00:14:48.546 "transports": [ 00:14:48.546 { 00:14:48.546 "trtype": "TCP" 00:14:48.546 } 00:14:48.546 ] 00:14:48.546 } 00:14:48.546 ] 00:14:48.546 }' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:48.546 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:48.806 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:14:48.806 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:48.807 rmmod nvme_tcp 00:14:48.807 rmmod nvme_fabrics 00:14:48.807 rmmod nvme_keyring 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1226260 ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1226260 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1226260 ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1226260 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1226260 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1226260' 00:14:48.807 killing process with pid 1226260 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1226260 00:14:48.807 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1226260 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:49.067 10:03:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:50.982 00:14:50.982 real 0m37.532s 00:14:50.982 user 1m53.590s 00:14:50.982 sys 0m7.231s 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.982 ************************************ 00:14:50.982 END TEST nvmf_rpc 00:14:50.982 ************************************ 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:50.982 10:03:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:51.243 ************************************ 00:14:51.243 START TEST nvmf_invalid 00:14:51.243 ************************************ 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:51.243 * Looking for test storage... 00:14:51.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:51.243 10:03:30 
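The nvmf/common.sh sourcing traced here (now for the nvmf_invalid test) derives the host identity that every nvme connect in this log reuses. The pattern amounts to the following (the variable names and the gen-hostnqn call are taken from the trace; deriving the host ID by stripping the NQN prefix is an assumption that matches the values shown):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # keep only the uuid part
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NVMF_SERIAL=SPDKISFASTANDAWESOME      # the serial grepped for by waitforserial above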
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:51.243 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:51.244 10:03:30 
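The PCI scan and namespace plumbing that follow, and that every 10.0.0.1 <-> 10.0.0.2 ping and connect in this log depends on, amount to moving one port of the discovered E810 pair into a private network namespace for the target while the other stays in the host namespace for the initiator. A rough reconstruction from the ip/iptables commands visible in the trace below (interface and namespace names are the ones this run discovered; the addr flushes are omitted and the ordering is slightly condensed):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # sanity-check the path in both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1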
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:14:51.244 10:03:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:59.444 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:59.444 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:59.445 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:59.445 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.445 10:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:59.445 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:59.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:59.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.665 ms 00:14:59.445 00:14:59.445 --- 10.0.0.2 ping statistics --- 00:14:59.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.445 rtt min/avg/max/mdev = 0.665/0.665/0.665/0.000 ms 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:59.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:59.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:14:59.445 00:14:59.445 --- 10.0.0.1 ping statistics --- 00:14:59.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:59.445 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1236657 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1236657 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1236657 ']' 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.445 10:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:59.445 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:59.445 [2024-07-25 10:03:37.614699] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:59.445 [2024-07-25 10:03:37.614765] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:59.445 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.445 [2024-07-25 10:03:37.686320] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:59.445 [2024-07-25 10:03:37.761132] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:59.445 [2024-07-25 10:03:37.761169] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:59.445 [2024-07-25 10:03:37.761177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:59.445 [2024-07-25 10:03:37.761183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:59.445 [2024-07-25 10:03:37.761189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
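For readers following the trace, the target bring-up captured above (the nvmf_tcp_init portion of nvmf/common.sh, followed by the nvmf_tgt launch whose reactor notices continue below) reduces to roughly the sequence sketched here. The commands, interface names, paths and addresses are copied from the trace itself; the grouping and comments are editorial, so treat this as a condensed sketch rather than a verbatim excerpt of the script:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                        # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first ice port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic on the initiator port
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # nvmfappstart -m 0xF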
00:14:59.445 [2024-07-25 10:03:37.761262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.445 [2024-07-25 10:03:37.761374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.445 [2024-07-25 10:03:37.761510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:59.445 [2024-07-25 10:03:37.761511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:59.445 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:59.445 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:59.446 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode18646 00:14:59.446 [2024-07-25 10:03:38.568415] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:59.707 { 00:14:59.707 "nqn": "nqn.2016-06.io.spdk:cnode18646", 00:14:59.707 "tgt_name": "foobar", 00:14:59.707 "method": "nvmf_create_subsystem", 00:14:59.707 "req_id": 1 00:14:59.707 } 00:14:59.707 Got JSON-RPC error response 00:14:59.707 response: 00:14:59.707 { 00:14:59.707 "code": -32603, 00:14:59.707 "message": "Unable to find target foobar" 00:14:59.707 }' 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:59.707 { 00:14:59.707 "nqn": "nqn.2016-06.io.spdk:cnode18646", 00:14:59.707 "tgt_name": "foobar", 00:14:59.707 "method": "nvmf_create_subsystem", 00:14:59.707 "req_id": 1 00:14:59.707 } 00:14:59.707 Got JSON-RPC error response 00:14:59.707 response: 00:14:59.707 { 00:14:59.707 "code": -32603, 00:14:59.707 "message": "Unable to find target foobar" 00:14:59.707 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode18181 00:14:59.707 [2024-07-25 10:03:38.749015] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18181: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:59.707 { 00:14:59.707 "nqn": "nqn.2016-06.io.spdk:cnode18181", 00:14:59.707 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:59.707 "method": "nvmf_create_subsystem", 00:14:59.707 "req_id": 1 00:14:59.707 } 00:14:59.707 Got JSON-RPC error 
response 00:14:59.707 response: 00:14:59.707 { 00:14:59.707 "code": -32602, 00:14:59.707 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:59.707 }' 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:59.707 { 00:14:59.707 "nqn": "nqn.2016-06.io.spdk:cnode18181", 00:14:59.707 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:59.707 "method": "nvmf_create_subsystem", 00:14:59.707 "req_id": 1 00:14:59.707 } 00:14:59.707 Got JSON-RPC error response 00:14:59.707 response: 00:14:59.707 { 00:14:59.707 "code": -32602, 00:14:59.707 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:59.707 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:59.707 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode14140 00:14:59.968 [2024-07-25 10:03:38.921615] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14140: invalid model number 'SPDK_Controller' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:59.969 { 00:14:59.969 "nqn": "nqn.2016-06.io.spdk:cnode14140", 00:14:59.969 "model_number": "SPDK_Controller\u001f", 00:14:59.969 "method": "nvmf_create_subsystem", 00:14:59.969 "req_id": 1 00:14:59.969 } 00:14:59.969 Got JSON-RPC error response 00:14:59.969 response: 00:14:59.969 { 00:14:59.969 "code": -32602, 00:14:59.969 "message": "Invalid MN SPDK_Controller\u001f" 00:14:59.969 }' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:59.969 { 00:14:59.969 "nqn": "nqn.2016-06.io.spdk:cnode14140", 00:14:59.969 "model_number": "SPDK_Controller\u001f", 00:14:59.969 "method": "nvmf_create_subsystem", 00:14:59.969 "req_id": 1 00:14:59.969 } 00:14:59.969 Got JSON-RPC error response 00:14:59.969 response: 00:14:59.969 { 00:14:59.969 "code": -32602, 00:14:59.969 "message": "Invalid MN SPDK_Controller\u001f" 00:14:59.969 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 81 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.969 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:59.970 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:00.231 10:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'QM,#6)/SNkcUbU ^9Au#l' 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'QM,#6)/SNkcUbU ^9Au#l' nqn.2016-06.io.spdk:cnode10952 00:15:00.231 [2024-07-25 10:03:39.250658] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10952: invalid serial number 'QM,#6)/SNkcUbU ^9Au#l' 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:00.231 { 00:15:00.231 "nqn": "nqn.2016-06.io.spdk:cnode10952", 00:15:00.231 "serial_number": "QM,#6)/SNkcUbU ^9Au#l", 00:15:00.231 "method": "nvmf_create_subsystem", 00:15:00.231 "req_id": 1 00:15:00.231 } 00:15:00.231 Got JSON-RPC error response 00:15:00.231 response: 00:15:00.231 { 00:15:00.231 "code": -32602, 00:15:00.231 "message": "Invalid SN QM,#6)/SNkcUbU ^9Au#l" 00:15:00.231 }' 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:00.231 { 00:15:00.231 "nqn": "nqn.2016-06.io.spdk:cnode10952", 00:15:00.231 "serial_number": "QM,#6)/SNkcUbU ^9Au#l", 00:15:00.231 "method": "nvmf_create_subsystem", 00:15:00.231 "req_id": 1 00:15:00.231 } 00:15:00.231 Got JSON-RPC error response 00:15:00.231 response: 00:15:00.231 { 00:15:00.231 "code": -32602, 00:15:00.231 "message": "Invalid SN QM,#6)/SNkcUbU ^9Au#l" 00:15:00.231 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:00.231 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:00.232 10:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:00.232 
10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.232 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:00.494 
10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 
10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.494 
10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:00.494 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 
00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ m == \- ]] 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'm|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmRy7wPU>c' 00:15:00.495 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'm|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmRy7wPU>c' nqn.2016-06.io.spdk:cnode15204 00:15:00.757 [2024-07-25 10:03:39.736190] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15204: invalid model number 'm|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmRy7wPU>c' 00:15:00.757 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:00.757 { 00:15:00.757 "nqn": "nqn.2016-06.io.spdk:cnode15204", 00:15:00.757 "model_number": "m|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmR\u007fy7wPU>c", 00:15:00.757 "method": "nvmf_create_subsystem", 00:15:00.757 "req_id": 1 00:15:00.757 } 00:15:00.757 Got JSON-RPC error response 00:15:00.757 response: 00:15:00.757 { 00:15:00.757 "code": -32602, 00:15:00.757 "message": "Invalid MN m|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmR\u007fy7wPU>c" 00:15:00.757 }' 00:15:00.757 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:00.757 { 00:15:00.757 "nqn": "nqn.2016-06.io.spdk:cnode15204", 00:15:00.757 "model_number": "m|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmR\u007fy7wPU>c", 00:15:00.757 "method": "nvmf_create_subsystem", 00:15:00.757 "req_id": 1 00:15:00.757 } 00:15:00.757 Got JSON-RPC error response 00:15:00.757 response: 00:15:00.757 { 00:15:00.757 "code": -32602, 00:15:00.757 "message": "Invalid MN m|yX/O1O}DS4r#5l>aq6/+@MRwNd@#SmR\u007fy7wPU>c" 00:15:00.757 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:00.757 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:01.018 [2024-07-25 10:03:39.908833] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.018 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:01.018 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:01.018 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:01.018 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:01.018 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:01.018 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:01.279 [2024-07-25 10:03:40.249916] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:01.279 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:01.279 { 00:15:01.279 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:01.279 "listen_address": { 00:15:01.279 "trtype": "tcp", 00:15:01.279 "traddr": "", 00:15:01.279 "trsvcid": "4421" 00:15:01.279 }, 00:15:01.279 "method": "nvmf_subsystem_remove_listener", 00:15:01.279 "req_id": 1 00:15:01.279 } 00:15:01.279 Got JSON-RPC error response 00:15:01.279 response: 00:15:01.279 { 00:15:01.279 "code": -32602, 00:15:01.279 "message": "Invalid parameters" 00:15:01.279 }' 00:15:01.279 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@70 -- # [[ request: 00:15:01.279 { 00:15:01.279 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:01.279 "listen_address": { 00:15:01.279 "trtype": "tcp", 00:15:01.279 "traddr": "", 00:15:01.279 "trsvcid": "4421" 00:15:01.279 }, 00:15:01.279 "method": "nvmf_subsystem_remove_listener", 00:15:01.279 "req_id": 1 00:15:01.279 } 00:15:01.279 Got JSON-RPC error response 00:15:01.279 response: 00:15:01.279 { 00:15:01.279 "code": -32602, 00:15:01.279 "message": "Invalid parameters" 00:15:01.279 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:01.279 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24972 -i 0 00:15:01.279 [2024-07-25 10:03:40.410363] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24972: invalid cntlid range [0-65519] 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:01.541 { 00:15:01.541 "nqn": "nqn.2016-06.io.spdk:cnode24972", 00:15:01.541 "min_cntlid": 0, 00:15:01.541 "method": "nvmf_create_subsystem", 00:15:01.541 "req_id": 1 00:15:01.541 } 00:15:01.541 Got JSON-RPC error response 00:15:01.541 response: 00:15:01.541 { 00:15:01.541 "code": -32602, 00:15:01.541 "message": "Invalid cntlid range [0-65519]" 00:15:01.541 }' 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:01.541 { 00:15:01.541 "nqn": "nqn.2016-06.io.spdk:cnode24972", 00:15:01.541 "min_cntlid": 0, 00:15:01.541 "method": "nvmf_create_subsystem", 00:15:01.541 "req_id": 1 00:15:01.541 } 00:15:01.541 Got JSON-RPC error response 00:15:01.541 response: 00:15:01.541 { 00:15:01.541 "code": -32602, 00:15:01.541 "message": "Invalid cntlid range [0-65519]" 00:15:01.541 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25257 -i 65520 00:15:01.541 [2024-07-25 10:03:40.586926] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25257: invalid cntlid range [65520-65519] 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:01.541 { 00:15:01.541 "nqn": "nqn.2016-06.io.spdk:cnode25257", 00:15:01.541 "min_cntlid": 65520, 00:15:01.541 "method": "nvmf_create_subsystem", 00:15:01.541 "req_id": 1 00:15:01.541 } 00:15:01.541 Got JSON-RPC error response 00:15:01.541 response: 00:15:01.541 { 00:15:01.541 "code": -32602, 00:15:01.541 "message": "Invalid cntlid range [65520-65519]" 00:15:01.541 }' 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:01.541 { 00:15:01.541 "nqn": "nqn.2016-06.io.spdk:cnode25257", 00:15:01.541 "min_cntlid": 65520, 00:15:01.541 "method": "nvmf_create_subsystem", 00:15:01.541 "req_id": 1 00:15:01.541 } 00:15:01.541 Got JSON-RPC error response 00:15:01.541 response: 00:15:01.541 { 00:15:01.541 "code": -32602, 00:15:01.541 "message": "Invalid cntlid range [65520-65519]" 00:15:01.541 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.541 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17245 -I 0 
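The cntlid-range checks traced immediately above and below all follow the same negative-test pattern: invalid.sh issues nvmf_create_subsystem through rpc.py with an out-of-range min/max controller ID and asserts only on the JSON-RPC error text. A minimal sketch of that pattern, with the nqn values, flags and expected error string taken from the trace (the helper variable and the output-capture plumbing are editorial simplifications, not the script's exact code):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24972 -i 0 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]      # min_cntlid=0 is rejected as range [0-65519]
    out=$($rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13515 -i 6 -I 5 2>&1) || true
    [[ $out == *"Invalid cntlid range"* ]]      # min > max is rejected as range [6-5]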
00:15:01.802 [2024-07-25 10:03:40.763497] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17245: invalid cntlid range [1-0] 00:15:01.802 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:01.802 { 00:15:01.802 "nqn": "nqn.2016-06.io.spdk:cnode17245", 00:15:01.802 "max_cntlid": 0, 00:15:01.802 "method": "nvmf_create_subsystem", 00:15:01.802 "req_id": 1 00:15:01.802 } 00:15:01.802 Got JSON-RPC error response 00:15:01.802 response: 00:15:01.802 { 00:15:01.802 "code": -32602, 00:15:01.802 "message": "Invalid cntlid range [1-0]" 00:15:01.802 }' 00:15:01.802 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:01.803 { 00:15:01.803 "nqn": "nqn.2016-06.io.spdk:cnode17245", 00:15:01.803 "max_cntlid": 0, 00:15:01.803 "method": "nvmf_create_subsystem", 00:15:01.803 "req_id": 1 00:15:01.803 } 00:15:01.803 Got JSON-RPC error response 00:15:01.803 response: 00:15:01.803 { 00:15:01.803 "code": -32602, 00:15:01.803 "message": "Invalid cntlid range [1-0]" 00:15:01.803 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:01.803 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15019 -I 65520 00:15:01.803 [2024-07-25 10:03:40.932060] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15019: invalid cntlid range [1-65520] 00:15:02.064 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:02.064 { 00:15:02.064 "nqn": "nqn.2016-06.io.spdk:cnode15019", 00:15:02.064 "max_cntlid": 65520, 00:15:02.064 "method": "nvmf_create_subsystem", 00:15:02.064 "req_id": 1 00:15:02.064 } 00:15:02.064 Got JSON-RPC error response 00:15:02.064 response: 00:15:02.064 { 00:15:02.064 "code": -32602, 00:15:02.064 "message": "Invalid cntlid range [1-65520]" 00:15:02.064 }' 00:15:02.064 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:02.064 { 00:15:02.064 "nqn": "nqn.2016-06.io.spdk:cnode15019", 00:15:02.064 "max_cntlid": 65520, 00:15:02.064 "method": "nvmf_create_subsystem", 00:15:02.064 "req_id": 1 00:15:02.064 } 00:15:02.064 Got JSON-RPC error response 00:15:02.064 response: 00:15:02.064 { 00:15:02.064 "code": -32602, 00:15:02.064 "message": "Invalid cntlid range [1-65520]" 00:15:02.064 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:02.064 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13515 -i 6 -I 5 00:15:02.064 [2024-07-25 10:03:41.104574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13515: invalid cntlid range [6-5] 00:15:02.064 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:02.064 { 00:15:02.064 "nqn": "nqn.2016-06.io.spdk:cnode13515", 00:15:02.064 "min_cntlid": 6, 00:15:02.064 "max_cntlid": 5, 00:15:02.064 "method": "nvmf_create_subsystem", 00:15:02.064 "req_id": 1 00:15:02.064 } 00:15:02.064 Got JSON-RPC error response 00:15:02.064 response: 00:15:02.064 { 00:15:02.064 "code": -32602, 00:15:02.064 "message": "Invalid cntlid range [6-5]" 00:15:02.064 }' 00:15:02.064 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:02.064 { 00:15:02.064 "nqn": 
"nqn.2016-06.io.spdk:cnode13515", 00:15:02.064 "min_cntlid": 6, 00:15:02.064 "max_cntlid": 5, 00:15:02.064 "method": "nvmf_create_subsystem", 00:15:02.064 "req_id": 1 00:15:02.064 } 00:15:02.064 Got JSON-RPC error response 00:15:02.064 response: 00:15:02.064 { 00:15:02.064 "code": -32602, 00:15:02.064 "message": "Invalid cntlid range [6-5]" 00:15:02.064 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:02.064 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:02.325 { 00:15:02.325 "name": "foobar", 00:15:02.325 "method": "nvmf_delete_target", 00:15:02.325 "req_id": 1 00:15:02.325 } 00:15:02.325 Got JSON-RPC error response 00:15:02.325 response: 00:15:02.325 { 00:15:02.325 "code": -32602, 00:15:02.325 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:02.325 }' 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:02.325 { 00:15:02.325 "name": "foobar", 00:15:02.325 "method": "nvmf_delete_target", 00:15:02.325 "req_id": 1 00:15:02.325 } 00:15:02.325 Got JSON-RPC error response 00:15:02.325 response: 00:15:02.325 { 00:15:02.325 "code": -32602, 00:15:02.325 "message": "The specified target doesn't exist, cannot delete it." 00:15:02.325 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:02.325 rmmod nvme_tcp 00:15:02.325 rmmod nvme_fabrics 00:15:02.325 rmmod nvme_keyring 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1236657 ']' 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1236657 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1236657 ']' 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1236657 00:15:02.325 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:02.326 
10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1236657 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1236657' 00:15:02.326 killing process with pid 1236657 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1236657 00:15:02.326 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1236657 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:02.587 10:03:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:04.505 00:15:04.505 real 0m13.409s 00:15:04.505 user 0m18.910s 00:15:04.505 sys 0m6.458s 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:04.505 ************************************ 00:15:04.505 END TEST nvmf_invalid 00:15:04.505 ************************************ 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:04.505 ************************************ 00:15:04.505 START TEST nvmf_connect_stress 00:15:04.505 ************************************ 00:15:04.505 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:04.766 * Looking for test storage... 
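The nvmf_invalid checks that finish above drive scripts/rpc.py with deliberately malformed arguments (out-of-range cntlid bounds, a model string carrying a non-printable byte, an empty listener address, a nonexistent target name) and then match the resulting JSON-RPC -32602 error text. For contrast, a minimal sketch of a create call that stays inside the accepted ranges, reusing only flags that already appear in this log; the 1-65519 cntlid window is inferred from the error messages, and the subsystem NQN and model string here are placeholders:

  # sketch only, not part of the recorded test run
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem \
      nqn.2016-06.io.spdk:cnode-example -a -s SPDK001 -d Example_Model -i 1 -I 65519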
00:15:04.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:04.766 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:04.767 10:03:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:12.914 10:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:12.914 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:12.914 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
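At this point nvmftestinit has walked its table of known Intel/Mellanox PCI IDs and matched two E810 ports (0x8086:0x159b, bound to the ice driver) at 0000:4b:00.0 and 0000:4b:00.1; the lines that follow pick up the net devices behind them. As an aside, not part of the recorded run, the same match can be reproduced on the host with a plain vendor:device filter:

  # lists the E810 functions the script matched above
  lspci -d 8086:159b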
00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:12.914 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:12.914 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:12.914 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.706 ms 00:15:12.915 00:15:12.915 --- 10.0.0.2 ping statistics --- 00:15:12.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.915 rtt min/avg/max/mdev = 0.706/0.706/0.706/0.000 ms 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:12.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:15:12.915 00:15:12.915 --- 10.0.0.1 ping statistics --- 00:15:12.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.915 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1241818 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1241818 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1241818 ']' 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.915 10:03:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 [2024-07-25 10:03:51.034949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
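The block above is the TCP-mode network setup (nvmf_tcp_init): the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed into one place, with the commands copied from the log and no new flags, the plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> root ns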
00:15:12.915 [2024-07-25 10:03:51.035049] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.915 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.915 [2024-07-25 10:03:51.126057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:12.915 [2024-07-25 10:03:51.219172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.915 [2024-07-25 10:03:51.219237] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.915 [2024-07-25 10:03:51.219245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.915 [2024-07-25 10:03:51.219253] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.915 [2024-07-25 10:03:51.219258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.915 [2024-07-25 10:03:51.219386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.915 [2024-07-25 10:03:51.219556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.915 [2024-07-25 10:03:51.219557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 [2024-07-25 10:03:51.865055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 [2024-07-25 10:03:51.906117] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:12.915 NULL1 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1241914 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:12.915 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.915 10:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.523 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.523 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:13.523 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.523 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.523 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.784 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.784 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:13.784 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.784 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.784 10:03:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.045 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.045 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:14.045 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.045 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.045 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.305 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.305 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:14.305 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.305 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.305 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.566 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.566 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:14.566 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.566 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.566 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.138 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.138 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:15.138 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.138 10:03:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.138 10:03:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.399 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.399 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:15.399 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.399 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.399 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.659 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.659 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:15.659 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.659 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.659 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.921 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.921 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:15.921 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.921 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.921 10:03:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.182 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.182 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:16.182 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.182 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.182 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.753 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.753 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:16.753 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.753 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.753 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.014 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.014 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:17.014 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.014 10:03:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.014 10:03:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.274 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.274 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:17.274 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.274 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.274 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.535 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.535 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:17.535 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.535 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.535 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.796 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.796 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:17.796 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.796 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.796 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.367 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.367 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:18.367 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.367 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.367 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.628 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.628 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:18.628 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.628 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.628 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.889 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.889 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:18.889 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.889 10:03:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.889 10:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.149 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.149 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:19.149 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.149 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.149 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.410 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.410 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:19.410 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.410 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.410 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.981 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.982 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:19.982 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.982 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.982 10:03:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.243 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.243 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:20.243 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.243 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.243 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.504 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.504 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:20.504 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.504 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.504 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.765 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.765 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:20.765 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.765 10:03:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.765 10:03:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.337 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.337 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:21.337 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.337 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.337 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.599 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.599 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:21.599 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.599 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.599 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.860 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.860 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:21.860 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.860 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.860 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.120 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.120 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:22.120 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.120 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.120 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.381 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.381 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:22.381 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.381 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.381 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.952 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.952 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:22.952 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.952 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.952 10:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.952 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1241914 00:15:23.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1241914) - No such process 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1241914 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:23.213 rmmod nvme_tcp 00:15:23.213 rmmod nvme_fabrics 00:15:23.213 rmmod nvme_keyring 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1241818 ']' 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1241818 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1241818 ']' 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1241818 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1241818 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1241818' 00:15:23.213 killing process with pid 1241818 00:15:23.213 10:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1241818 00:15:23.213 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1241818 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:23.475 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:25.447 00:15:25.447 real 0m20.782s 00:15:25.447 user 0m42.107s 00:15:25.447 sys 0m8.494s 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.447 ************************************ 00:15:25.447 END TEST nvmf_connect_stress 00:15:25.447 ************************************ 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.447 ************************************ 00:15:25.447 START TEST nvmf_fused_ordering 00:15:25.447 ************************************ 00:15:25.447 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:25.708 * Looking for test storage... 
00:15:25.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.708 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.708 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:25.708 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.708 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.708 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:25.709 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:33.851 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.852 10:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:33.852 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:33.852 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:33.852 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:33.852 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:15:33.852 00:15:33.852 --- 10.0.0.2 ping statistics --- 00:15:33.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.852 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:15:33.852 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:15:33.852 00:15:33.853 --- 10.0.0.1 ping statistics --- 00:15:33.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.853 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1248198 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1248198 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1248198 ']' 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:33.853 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 [2024-07-25 10:04:11.927570] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
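At this point nvmftestinit has wired the two ice-driven E810 ports into a point-to-point NVMe/TCP topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), port 4420 is opened with iptables, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch condensed from the nvmf/common.sh trace above, assuming the interface names, addresses and paths from this run (cvl_0_0/cvl_0_1 are PCI-derived and will differ on other hosts); run as root from the SPDK source tree:

  TARGET_IF=cvl_0_0        # becomes the NVMe/TCP target port, 10.0.0.2
  INITIATOR_IF=cvl_0_1     # stays in the default namespace, 10.0.0.1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"                  # isolate the target port in its own namespace
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

  # the target application is then started inside the namespace, and the harness
  # waits for its RPC socket (/var/tmp/spdk.sock) before issuing any rpc_cmd calls
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &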
00:15:33.853 [2024-07-25 10:04:11.927635] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.853 [2024-07-25 10:04:12.016124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.853 [2024-07-25 10:04:12.107883] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.853 [2024-07-25 10:04:12.107942] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:33.853 [2024-07-25 10:04:12.107950] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.853 [2024-07-25 10:04:12.107957] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.853 [2024-07-25 10:04:12.107963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.853 [2024-07-25 10:04:12.107997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 [2024-07-25 10:04:12.783568] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:15:33.853 [2024-07-25 10:04:12.799793] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 NULL1 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.853 10:04:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:33.853 [2024-07-25 10:04:12.857772] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
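The rpc_cmd calls above (fused_ordering.sh@15 through @20 in the trace) build the target side entirely over SPDK's JSON-RPC socket; rpc_cmd in autotest_common.sh is essentially a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. A sketch of the equivalent standalone sequence, assuming the nvmf_tgt started above is already listening on its default RPC socket; flags are copied verbatim from the trace rather than re-derived:

  # target-side plumbing, same arguments as the rpc_cmd calls in the log
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # '-t tcp -o' from NVMF_TRANSPORT_OPTS plus an 8192-byte io unit
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512             # the 1 GB namespace the initiator reports below
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

  # initiator side: the fused_ordering app connects with the transport ID below; the
  # fused_ordering(N) counters that follow in the log are its per-iteration progress output
  ./test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'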
00:15:33.853 [2024-07-25 10:04:12.857816] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248333 ] 00:15:33.853 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.435 Attached to nqn.2016-06.io.spdk:cnode1 00:15:34.435 Namespace ID: 1 size: 1GB 00:15:34.435 fused_ordering(0) 00:15:34.435 fused_ordering(1) 00:15:34.435 fused_ordering(2) 00:15:34.435 fused_ordering(3) 00:15:34.435 fused_ordering(4) 00:15:34.435 fused_ordering(5) 00:15:34.435 fused_ordering(6) 00:15:34.436 fused_ordering(7) 00:15:34.436 fused_ordering(8) 00:15:34.436 fused_ordering(9) 00:15:34.436 fused_ordering(10) 00:15:34.436 fused_ordering(11) 00:15:34.436 fused_ordering(12) 00:15:34.436 fused_ordering(13) 00:15:34.436 fused_ordering(14) 00:15:34.436 fused_ordering(15) 00:15:34.436 fused_ordering(16) 00:15:34.436 fused_ordering(17) 00:15:34.436 fused_ordering(18) 00:15:34.436 fused_ordering(19) 00:15:34.436 fused_ordering(20) 00:15:34.436 fused_ordering(21) 00:15:34.436 fused_ordering(22) 00:15:34.436 fused_ordering(23) 00:15:34.436 fused_ordering(24) 00:15:34.436 fused_ordering(25) 00:15:34.436 fused_ordering(26) 00:15:34.436 fused_ordering(27) 00:15:34.436 fused_ordering(28) 00:15:34.436 fused_ordering(29) 00:15:34.436 fused_ordering(30) 00:15:34.436 fused_ordering(31) 00:15:34.436 fused_ordering(32) 00:15:34.436 fused_ordering(33) 00:15:34.436 fused_ordering(34) 00:15:34.436 fused_ordering(35) 00:15:34.436 fused_ordering(36) 00:15:34.436 fused_ordering(37) 00:15:34.436 fused_ordering(38) 00:15:34.436 fused_ordering(39) 00:15:34.436 fused_ordering(40) 00:15:34.436 fused_ordering(41) 00:15:34.436 fused_ordering(42) 00:15:34.436 fused_ordering(43) 00:15:34.436 fused_ordering(44) 00:15:34.436 fused_ordering(45) 00:15:34.436 fused_ordering(46) 00:15:34.436 fused_ordering(47) 00:15:34.436 fused_ordering(48) 00:15:34.436 fused_ordering(49) 00:15:34.436 fused_ordering(50) 00:15:34.436 fused_ordering(51) 00:15:34.436 fused_ordering(52) 00:15:34.436 fused_ordering(53) 00:15:34.436 fused_ordering(54) 00:15:34.436 fused_ordering(55) 00:15:34.436 fused_ordering(56) 00:15:34.436 fused_ordering(57) 00:15:34.436 fused_ordering(58) 00:15:34.436 fused_ordering(59) 00:15:34.436 fused_ordering(60) 00:15:34.436 fused_ordering(61) 00:15:34.436 fused_ordering(62) 00:15:34.436 fused_ordering(63) 00:15:34.436 fused_ordering(64) 00:15:34.436 fused_ordering(65) 00:15:34.436 fused_ordering(66) 00:15:34.436 fused_ordering(67) 00:15:34.436 fused_ordering(68) 00:15:34.436 fused_ordering(69) 00:15:34.436 fused_ordering(70) 00:15:34.436 fused_ordering(71) 00:15:34.436 fused_ordering(72) 00:15:34.436 fused_ordering(73) 00:15:34.436 fused_ordering(74) 00:15:34.436 fused_ordering(75) 00:15:34.436 fused_ordering(76) 00:15:34.436 fused_ordering(77) 00:15:34.436 fused_ordering(78) 00:15:34.436 fused_ordering(79) 00:15:34.436 fused_ordering(80) 00:15:34.436 fused_ordering(81) 00:15:34.436 fused_ordering(82) 00:15:34.436 fused_ordering(83) 00:15:34.436 fused_ordering(84) 00:15:34.436 fused_ordering(85) 00:15:34.436 fused_ordering(86) 00:15:34.436 fused_ordering(87) 00:15:34.436 fused_ordering(88) 00:15:34.436 fused_ordering(89) 00:15:34.436 fused_ordering(90) 00:15:34.436 fused_ordering(91) 00:15:34.436 fused_ordering(92) 00:15:34.436 fused_ordering(93) 00:15:34.436 fused_ordering(94) 00:15:34.436 fused_ordering(95) 00:15:34.436 fused_ordering(96) 
00:15:34.436 fused_ordering(97) 00:15:34.436 fused_ordering(98) 00:15:34.436 fused_ordering(99) 00:15:34.436 fused_ordering(100) 00:15:34.436 fused_ordering(101) 00:15:34.436 fused_ordering(102) 00:15:34.436 fused_ordering(103) 00:15:34.436 fused_ordering(104) 00:15:34.436 fused_ordering(105) 00:15:34.436 fused_ordering(106) 00:15:34.436 fused_ordering(107) 00:15:34.436 fused_ordering(108) 00:15:34.436 fused_ordering(109) 00:15:34.436 fused_ordering(110) 00:15:34.436 fused_ordering(111) 00:15:34.436 fused_ordering(112) 00:15:34.436 fused_ordering(113) 00:15:34.436 fused_ordering(114) 00:15:34.436 fused_ordering(115) 00:15:34.436 fused_ordering(116) 00:15:34.436 fused_ordering(117) 00:15:34.436 fused_ordering(118) 00:15:34.436 fused_ordering(119) 00:15:34.436 fused_ordering(120) 00:15:34.436 fused_ordering(121) 00:15:34.436 fused_ordering(122) 00:15:34.436 fused_ordering(123) 00:15:34.436 fused_ordering(124) 00:15:34.436 fused_ordering(125) 00:15:34.436 fused_ordering(126) 00:15:34.436 fused_ordering(127) 00:15:34.436 fused_ordering(128) 00:15:34.436 fused_ordering(129) 00:15:34.436 fused_ordering(130) 00:15:34.436 fused_ordering(131) 00:15:34.436 fused_ordering(132) 00:15:34.436 fused_ordering(133) 00:15:34.436 fused_ordering(134) 00:15:34.436 fused_ordering(135) 00:15:34.436 fused_ordering(136) 00:15:34.436 fused_ordering(137) 00:15:34.436 fused_ordering(138) 00:15:34.436 fused_ordering(139) 00:15:34.436 fused_ordering(140) 00:15:34.436 fused_ordering(141) 00:15:34.436 fused_ordering(142) 00:15:34.436 fused_ordering(143) 00:15:34.436 fused_ordering(144) 00:15:34.436 fused_ordering(145) 00:15:34.436 fused_ordering(146) 00:15:34.436 fused_ordering(147) 00:15:34.436 fused_ordering(148) 00:15:34.436 fused_ordering(149) 00:15:34.436 fused_ordering(150) 00:15:34.436 fused_ordering(151) 00:15:34.436 fused_ordering(152) 00:15:34.436 fused_ordering(153) 00:15:34.436 fused_ordering(154) 00:15:34.436 fused_ordering(155) 00:15:34.436 fused_ordering(156) 00:15:34.436 fused_ordering(157) 00:15:34.436 fused_ordering(158) 00:15:34.436 fused_ordering(159) 00:15:34.436 fused_ordering(160) 00:15:34.436 fused_ordering(161) 00:15:34.436 fused_ordering(162) 00:15:34.436 fused_ordering(163) 00:15:34.436 fused_ordering(164) 00:15:34.436 fused_ordering(165) 00:15:34.436 fused_ordering(166) 00:15:34.436 fused_ordering(167) 00:15:34.436 fused_ordering(168) 00:15:34.436 fused_ordering(169) 00:15:34.436 fused_ordering(170) 00:15:34.436 fused_ordering(171) 00:15:34.436 fused_ordering(172) 00:15:34.436 fused_ordering(173) 00:15:34.436 fused_ordering(174) 00:15:34.436 fused_ordering(175) 00:15:34.436 fused_ordering(176) 00:15:34.436 fused_ordering(177) 00:15:34.436 fused_ordering(178) 00:15:34.436 fused_ordering(179) 00:15:34.436 fused_ordering(180) 00:15:34.436 fused_ordering(181) 00:15:34.436 fused_ordering(182) 00:15:34.436 fused_ordering(183) 00:15:34.436 fused_ordering(184) 00:15:34.436 fused_ordering(185) 00:15:34.436 fused_ordering(186) 00:15:34.436 fused_ordering(187) 00:15:34.436 fused_ordering(188) 00:15:34.436 fused_ordering(189) 00:15:34.436 fused_ordering(190) 00:15:34.436 fused_ordering(191) 00:15:34.436 fused_ordering(192) 00:15:34.436 fused_ordering(193) 00:15:34.436 fused_ordering(194) 00:15:34.436 fused_ordering(195) 00:15:34.436 fused_ordering(196) 00:15:34.436 fused_ordering(197) 00:15:34.436 fused_ordering(198) 00:15:34.436 fused_ordering(199) 00:15:34.436 fused_ordering(200) 00:15:34.436 fused_ordering(201) 00:15:34.436 fused_ordering(202) 00:15:34.436 fused_ordering(203) 00:15:34.436 
fused_ordering(204) 00:15:34.436 fused_ordering(205) 00:15:35.081 fused_ordering(206) 00:15:35.081 fused_ordering(207) 00:15:35.081 fused_ordering(208) 00:15:35.081 fused_ordering(209) 00:15:35.081 fused_ordering(210) 00:15:35.081 fused_ordering(211) 00:15:35.081 fused_ordering(212) 00:15:35.081 fused_ordering(213) 00:15:35.081 fused_ordering(214) 00:15:35.081 fused_ordering(215) 00:15:35.081 fused_ordering(216) 00:15:35.081 fused_ordering(217) 00:15:35.081 fused_ordering(218) 00:15:35.081 fused_ordering(219) 00:15:35.081 fused_ordering(220) 00:15:35.081 fused_ordering(221) 00:15:35.081 fused_ordering(222) 00:15:35.081 fused_ordering(223) 00:15:35.081 fused_ordering(224) 00:15:35.081 fused_ordering(225) 00:15:35.081 fused_ordering(226) 00:15:35.081 fused_ordering(227) 00:15:35.081 fused_ordering(228) 00:15:35.081 fused_ordering(229) 00:15:35.081 fused_ordering(230) 00:15:35.081 fused_ordering(231) 00:15:35.081 fused_ordering(232) 00:15:35.081 fused_ordering(233) 00:15:35.081 fused_ordering(234) 00:15:35.081 fused_ordering(235) 00:15:35.081 fused_ordering(236) 00:15:35.081 fused_ordering(237) 00:15:35.081 fused_ordering(238) 00:15:35.081 fused_ordering(239) 00:15:35.081 fused_ordering(240) 00:15:35.081 fused_ordering(241) 00:15:35.081 fused_ordering(242) 00:15:35.081 fused_ordering(243) 00:15:35.081 fused_ordering(244) 00:15:35.081 fused_ordering(245) 00:15:35.081 fused_ordering(246) 00:15:35.081 fused_ordering(247) 00:15:35.081 fused_ordering(248) 00:15:35.081 fused_ordering(249) 00:15:35.081 fused_ordering(250) 00:15:35.081 fused_ordering(251) 00:15:35.081 fused_ordering(252) 00:15:35.081 fused_ordering(253) 00:15:35.081 fused_ordering(254) 00:15:35.081 fused_ordering(255) 00:15:35.081 fused_ordering(256) 00:15:35.081 fused_ordering(257) 00:15:35.081 fused_ordering(258) 00:15:35.081 fused_ordering(259) 00:15:35.081 fused_ordering(260) 00:15:35.081 fused_ordering(261) 00:15:35.081 fused_ordering(262) 00:15:35.081 fused_ordering(263) 00:15:35.081 fused_ordering(264) 00:15:35.081 fused_ordering(265) 00:15:35.081 fused_ordering(266) 00:15:35.081 fused_ordering(267) 00:15:35.081 fused_ordering(268) 00:15:35.081 fused_ordering(269) 00:15:35.081 fused_ordering(270) 00:15:35.081 fused_ordering(271) 00:15:35.081 fused_ordering(272) 00:15:35.081 fused_ordering(273) 00:15:35.081 fused_ordering(274) 00:15:35.081 fused_ordering(275) 00:15:35.081 fused_ordering(276) 00:15:35.081 fused_ordering(277) 00:15:35.081 fused_ordering(278) 00:15:35.081 fused_ordering(279) 00:15:35.081 fused_ordering(280) 00:15:35.081 fused_ordering(281) 00:15:35.081 fused_ordering(282) 00:15:35.081 fused_ordering(283) 00:15:35.082 fused_ordering(284) 00:15:35.082 fused_ordering(285) 00:15:35.082 fused_ordering(286) 00:15:35.082 fused_ordering(287) 00:15:35.082 fused_ordering(288) 00:15:35.082 fused_ordering(289) 00:15:35.082 fused_ordering(290) 00:15:35.082 fused_ordering(291) 00:15:35.082 fused_ordering(292) 00:15:35.082 fused_ordering(293) 00:15:35.082 fused_ordering(294) 00:15:35.082 fused_ordering(295) 00:15:35.082 fused_ordering(296) 00:15:35.082 fused_ordering(297) 00:15:35.082 fused_ordering(298) 00:15:35.082 fused_ordering(299) 00:15:35.082 fused_ordering(300) 00:15:35.082 fused_ordering(301) 00:15:35.082 fused_ordering(302) 00:15:35.082 fused_ordering(303) 00:15:35.082 fused_ordering(304) 00:15:35.082 fused_ordering(305) 00:15:35.082 fused_ordering(306) 00:15:35.082 fused_ordering(307) 00:15:35.082 fused_ordering(308) 00:15:35.082 fused_ordering(309) 00:15:35.082 fused_ordering(310) 00:15:35.082 fused_ordering(311) 
00:15:35.082 fused_ordering(312) ... 00:15:37.170 fused_ordering(956) [repetitive fused_ordering counter output for operations 312 through 956, emitted in sequence between 00:15:35.082 and 00:15:37.170, elided]
00:15:37.170 fused_ordering(957) 00:15:37.170 fused_ordering(958) 00:15:37.170 fused_ordering(959) 00:15:37.170 fused_ordering(960) 00:15:37.170 fused_ordering(961) 00:15:37.170 fused_ordering(962) 00:15:37.170 fused_ordering(963) 00:15:37.170 fused_ordering(964) 00:15:37.170 fused_ordering(965) 00:15:37.170 fused_ordering(966) 00:15:37.170 fused_ordering(967) 00:15:37.170 fused_ordering(968) 00:15:37.170 fused_ordering(969) 00:15:37.170 fused_ordering(970) 00:15:37.170 fused_ordering(971) 00:15:37.170 fused_ordering(972) 00:15:37.170 fused_ordering(973) 00:15:37.170 fused_ordering(974) 00:15:37.170 fused_ordering(975) 00:15:37.170 fused_ordering(976) 00:15:37.170 fused_ordering(977) 00:15:37.170 fused_ordering(978) 00:15:37.170 fused_ordering(979) 00:15:37.170 fused_ordering(980) 00:15:37.170 fused_ordering(981) 00:15:37.170 fused_ordering(982) 00:15:37.170 fused_ordering(983) 00:15:37.170 fused_ordering(984) 00:15:37.170 fused_ordering(985) 00:15:37.170 fused_ordering(986) 00:15:37.170 fused_ordering(987) 00:15:37.170 fused_ordering(988) 00:15:37.170 fused_ordering(989) 00:15:37.170 fused_ordering(990) 00:15:37.170 fused_ordering(991) 00:15:37.170 fused_ordering(992) 00:15:37.170 fused_ordering(993) 00:15:37.170 fused_ordering(994) 00:15:37.170 fused_ordering(995) 00:15:37.170 fused_ordering(996) 00:15:37.170 fused_ordering(997) 00:15:37.170 fused_ordering(998) 00:15:37.170 fused_ordering(999) 00:15:37.170 fused_ordering(1000) 00:15:37.170 fused_ordering(1001) 00:15:37.170 fused_ordering(1002) 00:15:37.170 fused_ordering(1003) 00:15:37.170 fused_ordering(1004) 00:15:37.170 fused_ordering(1005) 00:15:37.170 fused_ordering(1006) 00:15:37.170 fused_ordering(1007) 00:15:37.170 fused_ordering(1008) 00:15:37.170 fused_ordering(1009) 00:15:37.170 fused_ordering(1010) 00:15:37.170 fused_ordering(1011) 00:15:37.170 fused_ordering(1012) 00:15:37.170 fused_ordering(1013) 00:15:37.170 fused_ordering(1014) 00:15:37.170 fused_ordering(1015) 00:15:37.170 fused_ordering(1016) 00:15:37.170 fused_ordering(1017) 00:15:37.170 fused_ordering(1018) 00:15:37.170 fused_ordering(1019) 00:15:37.170 fused_ordering(1020) 00:15:37.170 fused_ordering(1021) 00:15:37.170 fused_ordering(1022) 00:15:37.170 fused_ordering(1023) 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.170 rmmod nvme_tcp 00:15:37.170 rmmod nvme_fabrics 00:15:37.170 rmmod nvme_keyring 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1248198 ']' 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1248198 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1248198 ']' 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1248198 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1248198 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1248198' 00:15:37.170 killing process with pid 1248198 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1248198 00:15:37.170 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1248198 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:37.432 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.348 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:39.348 00:15:39.348 real 0m13.884s 00:15:39.348 user 0m7.700s 00:15:39.348 sys 0m7.698s 00:15:39.348 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:39.348 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:39.348 ************************************ 00:15:39.348 END TEST nvmf_fused_ordering 00:15:39.348 ************************************ 00:15:39.348 10:04:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:39.348 10:04:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:39.349 10:04:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:39.349 10:04:18 
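The nvmftestfini/nvmfcleanup trace above boils down to a short teardown sequence. A minimal sketch of the equivalent manual steps, assuming the PID (1248198), interface (cvl_0_1) and network namespace (cvl_0_0_ns_spdk) used in this run, and assuming _remove_spdk_ns amounts to deleting that namespace:
nvmfpid=1248198                      # "killing process with pid 1248198" above
sync
modprobe -v -r nvme-tcp              # unloads nvme_tcp (see the rmmod lines above)
modprobe -v -r nvme-fabrics          # unloads nvme_fabrics / nvme_keyring
kill "$nvmfpid"                      # stop the nvmf_tgt reactor
ip netns delete cvl_0_0_ns_spdk      # assumed equivalent of _remove_spdk_ns for this run
ip -4 addr flush cvl_0_1             # matches nvmf/common.sh@279 above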
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.349 ************************************ 00:15:39.349 START TEST nvmf_ns_masking 00:15:39.349 ************************************ 00:15:39.349 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:39.610 * Looking for test storage... 00:15:39.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.610 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.611 10:04:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=1b536902-0e30-4279-bbc7-78265dd78245 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=1a741166-e7f1-4ee5-a8a3-3632c2c739f2 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f2941c9f-7dc3-499f-9014-e56e2e37ad5d 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:39.611 10:04:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:47.757 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:47.757 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:47.757 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:47.757 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.757 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.758 10:04:25 
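The device-discovery loop traced above resolves each supported NIC PCI address to its kernel net device through sysfs; a stand-alone sketch of that lookup, assuming the two E810 (0x8086 - 0x159b) ports found on this host:
for pci in 0000:4b:00.0 0000:4b:00.1; do              # the two E810 ports reported above
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # same sysfs glob as nvmf/common.sh@383
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the device names (cvl_0_0, cvl_0_1)
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done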
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.816 ms 00:15:47.758 00:15:47.758 --- 10.0.0.2 ping statistics --- 00:15:47.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.758 rtt min/avg/max/mdev = 0.816/0.816/0.816/0.000 ms 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:15:47.758 00:15:47.758 --- 10.0.0.1 ping statistics --- 00:15:47.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.758 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1253217 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1253217 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1253217 ']' 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
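Condensed, the nvmf_tcp_init trace above moves the first port (cvl_0_0) into a private network namespace for the target and keeps the second port (cvl_0_1) in the root namespace for the initiator; a hedged recap of the commands as they ran here:
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                               # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator reachability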
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:47.758 10:04:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:47.758 [2024-07-25 10:04:25.919603] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:47.758 [2024-07-25 10:04:25.919664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.758 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.758 [2024-07-25 10:04:25.994404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.758 [2024-07-25 10:04:26.062223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.758 [2024-07-25 10:04:26.062264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.758 [2024-07-25 10:04:26.062271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.758 [2024-07-25 10:04:26.062277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.758 [2024-07-25 10:04:26.062283] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.758 [2024-07-25 10:04:26.062302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.758 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:47.758 [2024-07-25 10:04:26.877640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.019 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:48.019 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:48.019 10:04:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:48.019 Malloc1 00:15:48.019 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.280 Malloc2 00:15:48.280 10:04:27 
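With networking in place, the trace above starts nvmf_tgt inside the target namespace and provisions it over the RPC socket. A sketch of those steps, assuming this workspace's paths and substituting a plain sleep for the script's waitforlisten helper:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
sleep 3                                          # stand-in for waitforlisten on /var/tmp/spdk.sock
$rpc nvmf_create_transport -t tcp -o -u 8192     # "*** TCP Transport Init ***" above
$rpc bdev_malloc_create 64 512 -b Malloc1        # 64 MB malloc bdev, 512-byte blocks
$rpc bdev_malloc_create 64 512 -b Malloc2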
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:48.540 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:48.540 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:48.800 [2024-07-25 10:04:27.740934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.800 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:48.800 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f2941c9f-7dc3-499f-9014-e56e2e37ad5d -a 10.0.0.2 -s 4420 -i 4 00:15:49.060 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:49.060 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:49.060 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:49.060 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:49.060 10:04:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:50.971 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:50.971 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:50.971 10:04:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.971 [ 0]:0x1 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:15:50.971 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=298f8c4383704f51b9c5258b476670ea 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 298f8c4383704f51b9c5258b476670ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:51.231 [ 0]:0x1 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=298f8c4383704f51b9c5258b476670ea 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 298f8c4383704f51b9c5258b476670ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:51.231 [ 1]:0x2 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.231 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.492 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:51.492 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.492 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:51.492 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.492 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:51.753 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:51.753 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:51.753 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
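The connect and visibility helpers traced above reduce to three host-side commands. A sketch using the subsystem, host NQN and host ID generated for this run:
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    -I f2941c9f-7dc3-499f-9014-e56e2e37ad5d -i 4
nvme list-ns /dev/nvme0 | grep 0x1                     # nsid 1 listed => visible to this host
nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid    # an all-zero NGUID means the namespace is masked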
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f2941c9f-7dc3-499f-9014-e56e2e37ad5d -a 10.0.0.2 -s 4420 -i 4 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:52.014 10:04:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:53.928 10:04:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:53.928 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.189 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.190 [ 0]:0x2 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.190 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.450 [ 0]:0x1 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=298f8c4383704f51b9c5258b476670ea 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 298f8c4383704f51b9c5258b476670ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.450 [ 1]:0x2 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.450 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.711 [ 0]:0x2 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:54.711 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:54.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.973 10:04:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.973 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:54.973 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f2941c9f-7dc3-499f-9014-e56e2e37ad5d -a 10.0.0.2 -s 4420 -i 4 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:55.234 10:04:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:57.148 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:57.409 [ 0]:0x1 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=298f8c4383704f51b9c5258b476670ea 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 298f8c4383704f51b9c5258b476670ea != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:57.409 [ 1]:0x2 00:15:57.409 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:57.410 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.410 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:57.410 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.410 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:57.670 10:04:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:57.670 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:57.671 [ 0]:0x2 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:57.671 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:57.932 [2024-07-25 10:04:36.859005] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:57.932 request: 00:15:57.932 { 00:15:57.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:57.932 "nsid": 2, 00:15:57.932 "host": "nqn.2016-06.io.spdk:host1", 00:15:57.932 "method": "nvmf_ns_remove_host", 00:15:57.932 "req_id": 1 00:15:57.932 } 00:15:57.932 Got JSON-RPC error response 00:15:57.932 response: 00:15:57.932 { 00:15:57.932 "code": -32602, 00:15:57.932 "message": "Invalid parameters" 00:15:57.932 } 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:57.932 10:04:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:57.932 [ 0]:0x2 00:15:57.932 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:57.933 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:57.933 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a1e62f37ae894dbf8ec6a12846c437e7 00:15:57.933 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a1e62f37ae894dbf8ec6a12846c437e7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:57.933 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:57.933 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:58.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1255522 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1255522 /var/tmp/host.sock 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1255522 ']' 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:58.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:58.194 10:04:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:58.194 [2024-07-25 10:04:37.256236] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:15:58.194 [2024-07-25 10:04:37.256289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1255522 ] 00:15:58.194 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.456 [2024-07-25 10:04:37.332663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.456 [2024-07-25 10:04:37.396910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.029 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:59.029 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:59.029 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.299 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.299 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 1b536902-0e30-4279-bbc7-78265dd78245 00:15:59.299 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:59.299 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 1B5369020E304279BBC778265DD78245 -i 00:15:59.591 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1a741166-e7f1-4ee5-a8a3-3632c2c739f2 00:15:59.591 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:15:59.591 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1A741166E7F14EE5A8A33632C2C739F2 -i 00:15:59.591 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:59.851 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:59.851 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:59.851 10:04:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:00.112 nvme0n1 00:16:00.112 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:00.112 10:04:39 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:00.373 nvme1n2 00:16:00.373 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:00.373 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:00.373 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:00.373 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:00.373 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:00.635 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:00.635 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:00.635 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:00.635 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 1b536902-0e30-4279-bbc7-78265dd78245 == \1\b\5\3\6\9\0\2\-\0\e\3\0\-\4\2\7\9\-\b\b\c\7\-\7\8\2\6\5\d\d\7\8\2\4\5 ]] 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1a741166-e7f1-4ee5-a8a3-3632c2c739f2 == \1\a\7\4\1\1\6\6\-\e\7\f\1\-\4\e\e\5\-\a\8\a\3\-\3\6\3\2\c\2\c\7\3\9\f\2 ]] 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1255522 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1255522 ']' 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1255522 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.896 10:04:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1255522 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1255522' 00:16:01.158 killing process with pid 1255522 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1255522 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1255522 00:16:01.158 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:01.418 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:01.419 rmmod nvme_tcp 00:16:01.419 rmmod nvme_fabrics 00:16:01.419 rmmod nvme_keyring 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1253217 ']' 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1253217 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1253217 ']' 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1253217 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.419 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1253217 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1253217' 00:16:01.680 killing process with pid 1253217 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1253217 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1253217 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:01.680 
10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.680 10:04:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:04.257 00:16:04.257 real 0m24.346s 00:16:04.257 user 0m24.238s 00:16:04.257 sys 0m7.388s 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:04.257 ************************************ 00:16:04.257 END TEST nvmf_ns_masking 00:16:04.257 ************************************ 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.257 ************************************ 00:16:04.257 START TEST nvmf_nvme_cli 00:16:04.257 ************************************ 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:04.257 * Looking for test storage... 
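The ns_masking run that ends above exercises SPDK's per-host namespace visibility controls end to end: a namespace added with --no-auto-visible stays hidden from the connected host (list-ns shows nothing and id-ns reports an all-zero NGUID) until nvmf_ns_add_host grants that host NQN access, and nvmf_ns_remove_host hides it again. A minimal sketch of that flow, using the same RPCs and host-side checks seen in the log (rpc.py path, device name, and NQNs abbreviated as placeholders):

  # Target side: attach a bdev as nsid 1 without auto-visibility, then grant one host access
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
  # Host side: the namespace counts as visible when list-ns shows it and its NGUID is non-zero
  nvme list-ns /dev/nvme0 | grep 0x1
  nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]] && echo "nsid 1 visible"
  # Revoke access again; the namespace disappears from this host's view
  scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The nvmf_nvme_cli test that starts next reuses the same target bring-up helpers (nvmftestinit/nvmfappstart) before driving the target with the plain nvme-cli tools.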
00:16:04.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.257 10:04:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.257 10:04:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:04.257 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.258 10:04:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:10.851 10:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:10.851 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:10.851 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:10.851 10:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:10.851 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:10.851 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:10.851 10:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:10.851 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:10.852 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:11.113 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:11.113 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:11.113 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:11.113 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:11.113 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:11.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:11.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.656 ms 00:16:11.375 00:16:11.375 --- 10.0.0.2 ping statistics --- 00:16:11.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.375 rtt min/avg/max/mdev = 0.656/0.656/0.656/0.000 ms 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:11.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:11.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.382 ms 00:16:11.375 00:16:11.375 --- 10.0.0.1 ping statistics --- 00:16:11.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:11.375 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1260420 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1260420 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1260420 ']' 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:11.375 10:04:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:11.375 [2024-07-25 10:04:50.420314] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
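For reference, the nvmf_tcp_init sequence traced above reduces to the shell sketch below. The interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses and the workspace path are the values observed in this particular run, not general defaults; on another host the detected devices will differ.

# Condensed sketch of the TCP test-bed setup performed by nvmf/common.sh above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # target gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the netns
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                                  # initiator -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check
# Start the SPDK target inside the namespace, as nvmfappstart does in this trace:
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &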
00:16:11.375 [2024-07-25 10:04:50.420368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.375 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.375 [2024-07-25 10:04:50.486953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:11.637 [2024-07-25 10:04:50.552504] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:11.637 [2024-07-25 10:04:50.552543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:11.637 [2024-07-25 10:04:50.552550] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:11.637 [2024-07-25 10:04:50.552556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:11.637 [2024-07-25 10:04:50.552562] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:11.637 [2024-07-25 10:04:50.552701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:11.637 [2024-07-25 10:04:50.552832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:11.637 [2024-07-25 10:04:50.552978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.637 [2024-07-25 10:04:50.552979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 [2024-07-25 10:04:51.230206] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 Malloc0 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:12.210 10:04:51 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 Malloc1 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 [2024-07-25 10:04:51.296061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.210 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:16:12.472 00:16:12.472 Discovery Log Number of Records 2, Generation counter 2 00:16:12.472 =====Discovery Log Entry 0====== 00:16:12.472 trtype: tcp 00:16:12.472 adrfam: ipv4 00:16:12.472 subtype: current discovery subsystem 00:16:12.472 treq: not required 
00:16:12.472 portid: 0 00:16:12.472 trsvcid: 4420 00:16:12.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:12.472 traddr: 10.0.0.2 00:16:12.472 eflags: explicit discovery connections, duplicate discovery information 00:16:12.472 sectype: none 00:16:12.472 =====Discovery Log Entry 1====== 00:16:12.472 trtype: tcp 00:16:12.472 adrfam: ipv4 00:16:12.472 subtype: nvme subsystem 00:16:12.472 treq: not required 00:16:12.472 portid: 0 00:16:12.472 trsvcid: 4420 00:16:12.472 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:12.472 traddr: 10.0.0.2 00:16:12.472 eflags: none 00:16:12.472 sectype: none 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:12.472 10:04:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:16:13.860 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.407 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.407 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:16.407 /dev/nvme0n1 ]] 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:16.408 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:16.669 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.669 rmmod nvme_tcp 00:16:16.669 rmmod nvme_fabrics 00:16:16.669 rmmod nvme_keyring 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1260420 ']' 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1260420 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1260420 ']' 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1260420 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1260420 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1260420' 00:16:16.669 killing process with pid 1260420 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1260420 00:16:16.669 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1260420 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.930 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.480 00:16:19.480 real 0m15.143s 00:16:19.480 user 0m23.539s 00:16:19.480 sys 0m6.075s 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:19.480 ************************************ 00:16:19.480 END TEST nvmf_nvme_cli 00:16:19.480 ************************************ 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:19.480 ************************************ 00:16:19.480 START TEST nvmf_vfio_user 00:16:19.480 ************************************ 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:19.480 * Looking for test storage... 
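Summarized for reference before the vfio-user run below: the nvmf_nvme_cli test that ended above amounts to the following sketch, with the rpc_cmd calls in the trace mapped onto plain scripts/rpc.py invocations against the running target's RPC socket. The rpc.py path, host NQN/ID and the SPDKISFASTANDAWESOME serial are the values from this run; treat them as placeholders elsewhere.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

# Target-side provisioning: two malloc namespaces behind one subsystem
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Initiator-side checks with nvme-cli
nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420
nvme connect  --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
sleep 2
lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME              # expect 2 block devices
nvme disconnect -n nqn.2016-06.io.spdk:cnode1

# Teardown
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1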
00:16:19.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:19.480 10:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1262075 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1262075' 00:16:19.480 Process pid: 1262075 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1262075 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1262075 ']' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:19.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:19.480 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:19.480 [2024-07-25 10:04:58.303708] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:19.480 [2024-07-25 10:04:58.303781] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:19.480 EAL: No free 2048 kB hugepages reported on node 1 00:16:19.480 [2024-07-25 10:04:58.370922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:19.480 [2024-07-25 10:04:58.447852] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:19.480 [2024-07-25 10:04:58.447897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:19.480 [2024-07-25 10:04:58.447904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:19.481 [2024-07-25 10:04:58.447910] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:19.481 [2024-07-25 10:04:58.447916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:19.481 [2024-07-25 10:04:58.448061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:19.481 [2024-07-25 10:04:58.448177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:19.481 [2024-07-25 10:04:58.448335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:19.481 [2024-07-25 10:04:58.448446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.051 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:20.051 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:20.051 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:20.995 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:21.255 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:21.255 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:21.255 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:21.255 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:21.255 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:21.516 Malloc1 00:16:21.516 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:21.516 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:21.777 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:22.038 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.038 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:22.038 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:22.038 Malloc2 00:16:22.038 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:16:22.299 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:22.560 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:22.560 [2024-07-25 10:05:01.660904] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:16:22.560 [2024-07-25 10:05:01.660942] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262754 ] 00:16:22.560 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.825 [2024-07-25 10:05:01.693848] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:22.825 [2024-07-25 10:05:01.701504] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:22.825 [2024-07-25 10:05:01.701523] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0e67231000 00:16:22.825 [2024-07-25 10:05:01.702507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.703506] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.704507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.705507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.708208] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.708526] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.709533] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.710534] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:22.825 [2024-07-25 10:05:01.711547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:22.825 [2024-07-25 10:05:01.711556] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0e67226000 00:16:22.825 [2024-07-25 10:05:01.712883] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:22.825 [2024-07-25 10:05:01.733831] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:22.825 [2024-07-25 10:05:01.733856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:22.825 [2024-07-25 10:05:01.736702] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:22.825 [2024-07-25 10:05:01.736752] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:22.825 [2024-07-25 10:05:01.736840] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:22.825 [2024-07-25 10:05:01.736855] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:22.825 [2024-07-25 10:05:01.736860] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:22.825 [2024-07-25 10:05:01.737702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:22.825 [2024-07-25 10:05:01.737715] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:22.825 [2024-07-25 10:05:01.737722] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:22.825 [2024-07-25 10:05:01.738707] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:22.825 [2024-07-25 10:05:01.738716] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:22.825 [2024-07-25 10:05:01.738723] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:22.825 [2024-07-25 10:05:01.739709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:22.825 [2024-07-25 10:05:01.739718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:22.825 [2024-07-25 10:05:01.740719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:22.825 [2024-07-25 10:05:01.740727] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:22.825 [2024-07-25 10:05:01.740732] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:22.825 [2024-07-25 10:05:01.740739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:22.825 [2024-07-25 10:05:01.740844] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:22.826 [2024-07-25 10:05:01.740849] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:22.826 [2024-07-25 10:05:01.740854] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:22.826 [2024-07-25 10:05:01.741722] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:22.826 [2024-07-25 10:05:01.742732] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:22.826 [2024-07-25 10:05:01.743736] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:22.826 [2024-07-25 10:05:01.744738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:22.826 [2024-07-25 10:05:01.744798] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:22.826 [2024-07-25 10:05:01.745754] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:22.826 [2024-07-25 10:05:01.745763] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:22.826 [2024-07-25 10:05:01.745768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.745789] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:22.826 [2024-07-25 10:05:01.745801] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.745815] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.826 [2024-07-25 10:05:01.745820] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.826 [2024-07-25 10:05:01.745824] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.826 [2024-07-25 10:05:01.745837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.745874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.745882] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:22.826 [2024-07-25 10:05:01.745887] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:22.826 [2024-07-25 10:05:01.745891] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:22.826 [2024-07-25 10:05:01.745896] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:22.826 [2024-07-25 10:05:01.745901] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:22.826 [2024-07-25 10:05:01.745905] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:22.826 [2024-07-25 10:05:01.745910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.745917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.745929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.745939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.745953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.826 [2024-07-25 10:05:01.745964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.826 [2024-07-25 10:05:01.745972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.826 [2024-07-25 10:05:01.745981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:22.826 [2024-07-25 10:05:01.745985] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.745994] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746018] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:22.826 
[2024-07-25 10:05:01.746023] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746031] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746037] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746046] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746133] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:22.826 [2024-07-25 10:05:01.746137] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:22.826 [2024-07-25 10:05:01.746141] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.826 [2024-07-25 10:05:01.746147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746169] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:22.826 [2024-07-25 10:05:01.746177] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746192] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.826 [2024-07-25 10:05:01.746196] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.826 [2024-07-25 10:05:01.746200] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.826 [2024-07-25 10:05:01.746213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746243] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746258] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:22.826 [2024-07-25 10:05:01.746262] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.826 [2024-07-25 10:05:01.746265] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.826 [2024-07-25 10:05:01.746272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746290] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746297] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746312] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746317] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746322] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746327] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:22.826 [2024-07-25 10:05:01.746332] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:22.826 [2024-07-25 10:05:01.746337] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:22.826 [2024-07-25 10:05:01.746355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 10:05:01.746385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:22.826 [2024-07-25 10:05:01.746396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:22.826 [2024-07-25 
10:05:01.746403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:22.827 [2024-07-25 10:05:01.746414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:22.827 [2024-07-25 10:05:01.746425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:22.827 [2024-07-25 10:05:01.746438] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:22.827 [2024-07-25 10:05:01.746442] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:22.827 [2024-07-25 10:05:01.746446] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:22.827 [2024-07-25 10:05:01.746450] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:22.827 [2024-07-25 10:05:01.746453] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:22.827 [2024-07-25 10:05:01.746459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:22.827 [2024-07-25 10:05:01.746467] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:22.827 [2024-07-25 10:05:01.746471] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:22.827 [2024-07-25 10:05:01.746474] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.827 [2024-07-25 10:05:01.746480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:22.827 [2024-07-25 10:05:01.746487] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:22.827 [2024-07-25 10:05:01.746492] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:22.827 [2024-07-25 10:05:01.746495] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.827 [2024-07-25 10:05:01.746501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:22.827 [2024-07-25 10:05:01.746509] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:22.827 [2024-07-25 10:05:01.746513] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:22.827 [2024-07-25 10:05:01.746516] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:22.827 [2024-07-25 10:05:01.746522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:22.827 [2024-07-25 10:05:01.746529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:22.827 [2024-07-25 10:05:01.746541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:22.827 [2024-07-25 
10:05:01.746553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:22.827 [2024-07-25 10:05:01.746560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:22.827 ===================================================== 00:16:22.827 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:22.827 ===================================================== 00:16:22.827 Controller Capabilities/Features 00:16:22.827 ================================ 00:16:22.827 Vendor ID: 4e58 00:16:22.827 Subsystem Vendor ID: 4e58 00:16:22.827 Serial Number: SPDK1 00:16:22.827 Model Number: SPDK bdev Controller 00:16:22.827 Firmware Version: 24.09 00:16:22.827 Recommended Arb Burst: 6 00:16:22.827 IEEE OUI Identifier: 8d 6b 50 00:16:22.827 Multi-path I/O 00:16:22.827 May have multiple subsystem ports: Yes 00:16:22.827 May have multiple controllers: Yes 00:16:22.827 Associated with SR-IOV VF: No 00:16:22.827 Max Data Transfer Size: 131072 00:16:22.827 Max Number of Namespaces: 32 00:16:22.827 Max Number of I/O Queues: 127 00:16:22.827 NVMe Specification Version (VS): 1.3 00:16:22.827 NVMe Specification Version (Identify): 1.3 00:16:22.827 Maximum Queue Entries: 256 00:16:22.827 Contiguous Queues Required: Yes 00:16:22.827 Arbitration Mechanisms Supported 00:16:22.827 Weighted Round Robin: Not Supported 00:16:22.827 Vendor Specific: Not Supported 00:16:22.827 Reset Timeout: 15000 ms 00:16:22.827 Doorbell Stride: 4 bytes 00:16:22.827 NVM Subsystem Reset: Not Supported 00:16:22.827 Command Sets Supported 00:16:22.827 NVM Command Set: Supported 00:16:22.827 Boot Partition: Not Supported 00:16:22.827 Memory Page Size Minimum: 4096 bytes 00:16:22.827 Memory Page Size Maximum: 4096 bytes 00:16:22.827 Persistent Memory Region: Not Supported 00:16:22.827 Optional Asynchronous Events Supported 00:16:22.827 Namespace Attribute Notices: Supported 00:16:22.827 Firmware Activation Notices: Not Supported 00:16:22.827 ANA Change Notices: Not Supported 00:16:22.827 PLE Aggregate Log Change Notices: Not Supported 00:16:22.827 LBA Status Info Alert Notices: Not Supported 00:16:22.827 EGE Aggregate Log Change Notices: Not Supported 00:16:22.827 Normal NVM Subsystem Shutdown event: Not Supported 00:16:22.827 Zone Descriptor Change Notices: Not Supported 00:16:22.827 Discovery Log Change Notices: Not Supported 00:16:22.827 Controller Attributes 00:16:22.827 128-bit Host Identifier: Supported 00:16:22.827 Non-Operational Permissive Mode: Not Supported 00:16:22.827 NVM Sets: Not Supported 00:16:22.827 Read Recovery Levels: Not Supported 00:16:22.827 Endurance Groups: Not Supported 00:16:22.827 Predictable Latency Mode: Not Supported 00:16:22.827 Traffic Based Keep ALive: Not Supported 00:16:22.827 Namespace Granularity: Not Supported 00:16:22.827 SQ Associations: Not Supported 00:16:22.827 UUID List: Not Supported 00:16:22.827 Multi-Domain Subsystem: Not Supported 00:16:22.827 Fixed Capacity Management: Not Supported 00:16:22.827 Variable Capacity Management: Not Supported 00:16:22.827 Delete Endurance Group: Not Supported 00:16:22.827 Delete NVM Set: Not Supported 00:16:22.827 Extended LBA Formats Supported: Not Supported 00:16:22.827 Flexible Data Placement Supported: Not Supported 00:16:22.827 00:16:22.827 Controller Memory Buffer Support 00:16:22.827 ================================ 00:16:22.827 Supported: No 00:16:22.827 00:16:22.827 Persistent 
Memory Region Support 00:16:22.827 ================================ 00:16:22.827 Supported: No 00:16:22.827 00:16:22.827 Admin Command Set Attributes 00:16:22.827 ============================ 00:16:22.827 Security Send/Receive: Not Supported 00:16:22.827 Format NVM: Not Supported 00:16:22.827 Firmware Activate/Download: Not Supported 00:16:22.827 Namespace Management: Not Supported 00:16:22.827 Device Self-Test: Not Supported 00:16:22.827 Directives: Not Supported 00:16:22.827 NVMe-MI: Not Supported 00:16:22.827 Virtualization Management: Not Supported 00:16:22.827 Doorbell Buffer Config: Not Supported 00:16:22.827 Get LBA Status Capability: Not Supported 00:16:22.827 Command & Feature Lockdown Capability: Not Supported 00:16:22.827 Abort Command Limit: 4 00:16:22.827 Async Event Request Limit: 4 00:16:22.827 Number of Firmware Slots: N/A 00:16:22.827 Firmware Slot 1 Read-Only: N/A 00:16:22.827 Firmware Activation Without Reset: N/A 00:16:22.827 Multiple Update Detection Support: N/A 00:16:22.827 Firmware Update Granularity: No Information Provided 00:16:22.827 Per-Namespace SMART Log: No 00:16:22.827 Asymmetric Namespace Access Log Page: Not Supported 00:16:22.827 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:22.827 Command Effects Log Page: Supported 00:16:22.827 Get Log Page Extended Data: Supported 00:16:22.827 Telemetry Log Pages: Not Supported 00:16:22.827 Persistent Event Log Pages: Not Supported 00:16:22.827 Supported Log Pages Log Page: May Support 00:16:22.827 Commands Supported & Effects Log Page: Not Supported 00:16:22.827 Feature Identifiers & Effects Log Page:May Support 00:16:22.827 NVMe-MI Commands & Effects Log Page: May Support 00:16:22.827 Data Area 4 for Telemetry Log: Not Supported 00:16:22.827 Error Log Page Entries Supported: 128 00:16:22.827 Keep Alive: Supported 00:16:22.827 Keep Alive Granularity: 10000 ms 00:16:22.827 00:16:22.827 NVM Command Set Attributes 00:16:22.827 ========================== 00:16:22.827 Submission Queue Entry Size 00:16:22.827 Max: 64 00:16:22.827 Min: 64 00:16:22.827 Completion Queue Entry Size 00:16:22.827 Max: 16 00:16:22.827 Min: 16 00:16:22.827 Number of Namespaces: 32 00:16:22.827 Compare Command: Supported 00:16:22.827 Write Uncorrectable Command: Not Supported 00:16:22.827 Dataset Management Command: Supported 00:16:22.827 Write Zeroes Command: Supported 00:16:22.827 Set Features Save Field: Not Supported 00:16:22.827 Reservations: Not Supported 00:16:22.827 Timestamp: Not Supported 00:16:22.827 Copy: Supported 00:16:22.827 Volatile Write Cache: Present 00:16:22.827 Atomic Write Unit (Normal): 1 00:16:22.827 Atomic Write Unit (PFail): 1 00:16:22.827 Atomic Compare & Write Unit: 1 00:16:22.827 Fused Compare & Write: Supported 00:16:22.827 Scatter-Gather List 00:16:22.827 SGL Command Set: Supported (Dword aligned) 00:16:22.827 SGL Keyed: Not Supported 00:16:22.827 SGL Bit Bucket Descriptor: Not Supported 00:16:22.827 SGL Metadata Pointer: Not Supported 00:16:22.827 Oversized SGL: Not Supported 00:16:22.827 SGL Metadata Address: Not Supported 00:16:22.828 SGL Offset: Not Supported 00:16:22.828 Transport SGL Data Block: Not Supported 00:16:22.828 Replay Protected Memory Block: Not Supported 00:16:22.828 00:16:22.828 Firmware Slot Information 00:16:22.828 ========================= 00:16:22.828 Active slot: 1 00:16:22.828 Slot 1 Firmware Revision: 24.09 00:16:22.828 00:16:22.828 00:16:22.828 Commands Supported and Effects 00:16:22.828 ============================== 00:16:22.828 Admin Commands 00:16:22.828 -------------- 00:16:22.828 Get 
Log Page (02h): Supported 00:16:22.828 Identify (06h): Supported 00:16:22.828 Abort (08h): Supported 00:16:22.828 Set Features (09h): Supported 00:16:22.828 Get Features (0Ah): Supported 00:16:22.828 Asynchronous Event Request (0Ch): Supported 00:16:22.828 Keep Alive (18h): Supported 00:16:22.828 I/O Commands 00:16:22.828 ------------ 00:16:22.828 Flush (00h): Supported LBA-Change 00:16:22.828 Write (01h): Supported LBA-Change 00:16:22.828 Read (02h): Supported 00:16:22.828 Compare (05h): Supported 00:16:22.828 Write Zeroes (08h): Supported LBA-Change 00:16:22.828 Dataset Management (09h): Supported LBA-Change 00:16:22.828 Copy (19h): Supported LBA-Change 00:16:22.828 00:16:22.828 Error Log 00:16:22.828 ========= 00:16:22.828 00:16:22.828 Arbitration 00:16:22.828 =========== 00:16:22.828 Arbitration Burst: 1 00:16:22.828 00:16:22.828 Power Management 00:16:22.828 ================ 00:16:22.828 Number of Power States: 1 00:16:22.828 Current Power State: Power State #0 00:16:22.828 Power State #0: 00:16:22.828 Max Power: 0.00 W 00:16:22.828 Non-Operational State: Operational 00:16:22.828 Entry Latency: Not Reported 00:16:22.828 Exit Latency: Not Reported 00:16:22.828 Relative Read Throughput: 0 00:16:22.828 Relative Read Latency: 0 00:16:22.828 Relative Write Throughput: 0 00:16:22.828 Relative Write Latency: 0 00:16:22.828 Idle Power: Not Reported 00:16:22.828 Active Power: Not Reported 00:16:22.828 Non-Operational Permissive Mode: Not Supported 00:16:22.828 00:16:22.828 Health Information 00:16:22.828 ================== 00:16:22.828 Critical Warnings: 00:16:22.828 Available Spare Space: OK 00:16:22.828 Temperature: OK 00:16:22.828 Device Reliability: OK 00:16:22.828 Read Only: No 00:16:22.828 Volatile Memory Backup: OK 00:16:22.828 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:22.828 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:22.828 Available Spare: 0% 00:16:22.828 Available Sp[2024-07-25 10:05:01.746660] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:22.828 [2024-07-25 10:05:01.746671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:22.828 [2024-07-25 10:05:01.746698] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:22.828 [2024-07-25 10:05:01.746708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.828 [2024-07-25 10:05:01.746714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.828 [2024-07-25 10:05:01.746721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.828 [2024-07-25 10:05:01.746729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:22.828 [2024-07-25 10:05:01.746762] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:22.828 [2024-07-25 10:05:01.746771] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:22.828 [2024-07-25 10:05:01.747767] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:22.828 [2024-07-25 10:05:01.747809] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:22.828 [2024-07-25 10:05:01.747814] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:22.828 [2024-07-25 10:05:01.748773] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:22.828 [2024-07-25 10:05:01.748784] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:22.828 [2024-07-25 10:05:01.748843] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:22.828 [2024-07-25 10:05:01.753210] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:22.828 are Threshold: 0% 00:16:22.828 Life Percentage Used: 0% 00:16:22.828 Data Units Read: 0 00:16:22.828 Data Units Written: 0 00:16:22.828 Host Read Commands: 0 00:16:22.828 Host Write Commands: 0 00:16:22.828 Controller Busy Time: 0 minutes 00:16:22.828 Power Cycles: 0 00:16:22.828 Power On Hours: 0 hours 00:16:22.828 Unsafe Shutdowns: 0 00:16:22.828 Unrecoverable Media Errors: 0 00:16:22.828 Lifetime Error Log Entries: 0 00:16:22.828 Warning Temperature Time: 0 minutes 00:16:22.828 Critical Temperature Time: 0 minutes 00:16:22.828 00:16:22.828 Number of Queues 00:16:22.828 ================ 00:16:22.828 Number of I/O Submission Queues: 127 00:16:22.828 Number of I/O Completion Queues: 127 00:16:22.828 00:16:22.828 Active Namespaces 00:16:22.828 ================= 00:16:22.828 Namespace ID:1 00:16:22.828 Error Recovery Timeout: Unlimited 00:16:22.828 Command Set Identifier: NVM (00h) 00:16:22.828 Deallocate: Supported 00:16:22.828 Deallocated/Unwritten Error: Not Supported 00:16:22.828 Deallocated Read Value: Unknown 00:16:22.828 Deallocate in Write Zeroes: Not Supported 00:16:22.828 Deallocated Guard Field: 0xFFFF 00:16:22.828 Flush: Supported 00:16:22.828 Reservation: Supported 00:16:22.828 Namespace Sharing Capabilities: Multiple Controllers 00:16:22.828 Size (in LBAs): 131072 (0GiB) 00:16:22.828 Capacity (in LBAs): 131072 (0GiB) 00:16:22.828 Utilization (in LBAs): 131072 (0GiB) 00:16:22.828 NGUID: BD29F8108AC347FD8B3622B9BC9197DD 00:16:22.828 UUID: bd29f810-8ac3-47fd-8b36-22b9bc9197dd 00:16:22.828 Thin Provisioning: Not Supported 00:16:22.828 Per-NS Atomic Units: Yes 00:16:22.828 Atomic Boundary Size (Normal): 0 00:16:22.828 Atomic Boundary Size (PFail): 0 00:16:22.828 Atomic Boundary Offset: 0 00:16:22.828 Maximum Single Source Range Length: 65535 00:16:22.828 Maximum Copy Length: 65535 00:16:22.828 Maximum Source Range Count: 1 00:16:22.828 NGUID/EUI64 Never Reused: No 00:16:22.828 Namespace Write Protected: No 00:16:22.828 Number of LBA Formats: 1 00:16:22.828 Current LBA Format: LBA Format #00 00:16:22.828 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:22.828 00:16:22.828 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:22.828 EAL: No free 2048 kB hugepages reported 
on node 1 00:16:22.828 [2024-07-25 10:05:01.945846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:28.158 Initializing NVMe Controllers 00:16:28.158 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:28.158 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:28.158 Initialization complete. Launching workers. 00:16:28.158 ======================================================== 00:16:28.158 Latency(us) 00:16:28.158 Device Information : IOPS MiB/s Average min max 00:16:28.158 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39971.01 156.14 3202.20 845.56 7781.13 00:16:28.158 ======================================================== 00:16:28.158 Total : 39971.01 156.14 3202.20 845.56 7781.13 00:16:28.158 00:16:28.158 [2024-07-25 10:05:06.966330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:28.158 10:05:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:28.158 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.158 [2024-07-25 10:05:07.147171] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.448 Initializing NVMe Controllers 00:16:33.448 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:33.448 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:33.448 Initialization complete. Launching workers. 
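In the two spdk_nvme_perf runs here, the Latency(us) table reports per-device IOPS, throughput in MiB/s, and average/min/max completion latency in microseconds; the read run above and the write run below differ only in the -w workload argument. As a minimal sketch (not part of the CI output), the same 4 KiB read workload could be repeated by hand from an SPDK checkout with exactly the flags shown in the log; only the options whose meaning is certain are annotated, and -s 256 plus -g are simply carried over unchanged from the test script:
# queue depth 128, 4096-byte I/Os, sequential reads for 5 s, core mask 0x2 (core 1)
build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
  -q 128 -o 4096 -w read -t 5 -c 0x2 -s 256 -g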
00:16:33.448 ======================================================== 00:16:33.448 Latency(us) 00:16:33.448 Device Information : IOPS MiB/s Average min max 00:16:33.448 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.74 7630.40 7995.44 00:16:33.448 ======================================================== 00:16:33.448 Total : 16051.20 62.70 7980.74 7630.40 7995.44 00:16:33.448 00:16:33.448 [2024-07-25 10:05:12.181975] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.448 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:33.448 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.448 [2024-07-25 10:05:12.362821] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.742 [2024-07-25 10:05:17.434430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.742 Initializing NVMe Controllers 00:16:38.742 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:38.742 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:38.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:38.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:38.742 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:38.742 Initialization complete. Launching workers. 00:16:38.742 Starting thread on core 2 00:16:38.742 Starting thread on core 3 00:16:38.742 Starting thread on core 1 00:16:38.742 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:38.742 EAL: No free 2048 kB hugepages reported on node 1 00:16:38.742 [2024-07-25 10:05:17.686555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:42.047 [2024-07-25 10:05:20.747638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.047 Initializing NVMe Controllers 00:16:42.047 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.047 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.047 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:42.047 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:42.047 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:42.047 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:42.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:42.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:42.047 Initialization complete. Launching workers. 
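The -c arguments used by these example runs are plain hexadecimal core masks, one bit per logical core, which is why the reconnect output above reports threads on cores 1-3 (-c 0xE) and the arbitration results below cover cores 0-3 (-c 0xf in the expanded configuration echoed above). A small, hypothetical shell one-liner for decoding such a mask, not taken from the test script:
# 0x2 = 0b0010 -> core 1; 0xE = 0b1110 -> cores 1,2,3; 0xF = 0b1111 -> cores 0,1,2,3
for i in $(seq 0 7); do (( (0xE >> i) & 1 )) && echo "core $i"; done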
00:16:42.047 Starting thread on core 1 with urgent priority queue 00:16:42.047 Starting thread on core 2 with urgent priority queue 00:16:42.047 Starting thread on core 3 with urgent priority queue 00:16:42.047 Starting thread on core 0 with urgent priority queue 00:16:42.047 SPDK bdev Controller (SPDK1 ) core 0: 9722.00 IO/s 10.29 secs/100000 ios 00:16:42.047 SPDK bdev Controller (SPDK1 ) core 1: 8920.00 IO/s 11.21 secs/100000 ios 00:16:42.047 SPDK bdev Controller (SPDK1 ) core 2: 10901.00 IO/s 9.17 secs/100000 ios 00:16:42.047 SPDK bdev Controller (SPDK1 ) core 3: 8424.33 IO/s 11.87 secs/100000 ios 00:16:42.047 ======================================================== 00:16:42.047 00:16:42.047 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:42.047 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.048 [2024-07-25 10:05:21.013717] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:42.048 Initializing NVMe Controllers 00:16:42.048 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.048 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:42.048 Namespace ID: 1 size: 0GB 00:16:42.048 Initialization complete. 00:16:42.048 INFO: using host memory buffer for IO 00:16:42.048 Hello world! 00:16:42.048 [2024-07-25 10:05:21.045881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:42.048 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:42.048 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.309 [2024-07-25 10:05:21.306618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.258 Initializing NVMe Controllers 00:16:43.258 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:43.258 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:43.258 Initialization complete. Launching workers. 
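A note for reading the overhead output that follows: the submit/complete summary lines are reported in nanoseconds, while the histogram buckets are labelled in microseconds, so the two scales need a conversion when compared. For example, taking the maximum submit time printed below:
  4003992.5 ns / 1000 ≈ 4004.0 us, which lands in the 3986.773 - 4014.080 us bucket at the tail of the submit histogram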
00:16:43.258 submit (in ns) avg, min, max = 7429.4, 3926.7, 4003992.5 00:16:43.258 complete (in ns) avg, min, max = 20464.6, 2409.2, 7992786.7 00:16:43.258 00:16:43.258 Submit histogram 00:16:43.258 ================ 00:16:43.258 Range in us Cumulative Count 00:16:43.258 3.920 - 3.947: 1.5231% ( 292) 00:16:43.258 3.947 - 3.973: 7.1667% ( 1082) 00:16:43.258 3.973 - 4.000: 16.5345% ( 1796) 00:16:43.258 4.000 - 4.027: 29.6839% ( 2521) 00:16:43.258 4.027 - 4.053: 40.8982% ( 2150) 00:16:43.258 4.053 - 4.080: 51.4761% ( 2028) 00:16:43.258 4.080 - 4.107: 66.9205% ( 2961) 00:16:43.258 4.107 - 4.133: 81.6086% ( 2816) 00:16:43.258 4.133 - 4.160: 91.1694% ( 1833) 00:16:43.258 4.160 - 4.187: 96.0202% ( 930) 00:16:43.258 4.187 - 4.213: 97.8458% ( 350) 00:16:43.258 4.213 - 4.240: 98.5030% ( 126) 00:16:43.258 4.240 - 4.267: 98.7169% ( 41) 00:16:43.258 4.267 - 4.293: 98.7743% ( 11) 00:16:43.258 4.293 - 4.320: 98.8003% ( 5) 00:16:43.258 4.320 - 4.347: 98.8055% ( 1) 00:16:43.258 4.347 - 4.373: 98.8108% ( 1) 00:16:43.258 4.373 - 4.400: 98.8160% ( 1) 00:16:43.258 4.400 - 4.427: 98.8212% ( 1) 00:16:43.258 4.427 - 4.453: 98.8264% ( 1) 00:16:43.258 4.480 - 4.507: 98.8421% ( 3) 00:16:43.258 4.507 - 4.533: 98.8473% ( 1) 00:16:43.258 4.533 - 4.560: 98.8577% ( 2) 00:16:43.258 4.560 - 4.587: 98.8629% ( 1) 00:16:43.258 4.587 - 4.613: 98.8838% ( 4) 00:16:43.258 4.613 - 4.640: 98.8890% ( 1) 00:16:43.258 4.640 - 4.667: 98.8942% ( 1) 00:16:43.258 4.693 - 4.720: 98.9099% ( 3) 00:16:43.258 4.720 - 4.747: 98.9203% ( 2) 00:16:43.258 4.747 - 4.773: 98.9307% ( 2) 00:16:43.258 4.773 - 4.800: 98.9359% ( 1) 00:16:43.258 4.800 - 4.827: 98.9412% ( 1) 00:16:43.258 4.827 - 4.853: 98.9464% ( 1) 00:16:43.258 4.853 - 4.880: 98.9672% ( 4) 00:16:43.258 4.907 - 4.933: 98.9777% ( 2) 00:16:43.258 4.987 - 5.013: 98.9829% ( 1) 00:16:43.258 5.013 - 5.040: 98.9881% ( 1) 00:16:43.258 5.040 - 5.067: 98.9933% ( 1) 00:16:43.258 5.067 - 5.093: 99.0142% ( 4) 00:16:43.258 5.093 - 5.120: 99.0298% ( 3) 00:16:43.258 5.120 - 5.147: 99.0403% ( 2) 00:16:43.258 5.147 - 5.173: 99.0455% ( 1) 00:16:43.258 5.173 - 5.200: 99.0611% ( 3) 00:16:43.258 5.227 - 5.253: 99.0768% ( 3) 00:16:43.258 5.253 - 5.280: 99.0924% ( 3) 00:16:43.258 5.280 - 5.307: 99.1029% ( 2) 00:16:43.258 5.307 - 5.333: 99.1289% ( 5) 00:16:43.258 5.333 - 5.360: 99.1498% ( 4) 00:16:43.258 5.360 - 5.387: 99.1550% ( 1) 00:16:43.258 5.387 - 5.413: 99.1707% ( 3) 00:16:43.258 5.413 - 5.440: 99.1863% ( 3) 00:16:43.258 5.440 - 5.467: 99.2124% ( 5) 00:16:43.258 5.467 - 5.493: 99.2437% ( 6) 00:16:43.258 5.493 - 5.520: 99.2541% ( 2) 00:16:43.258 5.520 - 5.547: 99.2646% ( 2) 00:16:43.258 5.547 - 5.573: 99.2802% ( 3) 00:16:43.258 5.573 - 5.600: 99.2854% ( 1) 00:16:43.258 5.600 - 5.627: 99.2958% ( 2) 00:16:43.258 5.627 - 5.653: 99.3167% ( 4) 00:16:43.258 5.653 - 5.680: 99.3324% ( 3) 00:16:43.258 5.680 - 5.707: 99.3428% ( 2) 00:16:43.258 5.707 - 5.733: 99.3532% ( 2) 00:16:43.258 5.733 - 5.760: 99.3845% ( 6) 00:16:43.258 5.760 - 5.787: 99.4054% ( 4) 00:16:43.259 5.787 - 5.813: 99.4367% ( 6) 00:16:43.259 5.813 - 5.840: 99.4419% ( 1) 00:16:43.259 5.840 - 5.867: 99.4523% ( 2) 00:16:43.259 5.867 - 5.893: 99.4575% ( 1) 00:16:43.259 5.893 - 5.920: 99.4732% ( 3) 00:16:43.259 5.920 - 5.947: 99.4888% ( 3) 00:16:43.259 5.947 - 5.973: 99.4941% ( 1) 00:16:43.259 5.973 - 6.000: 99.5097% ( 3) 00:16:43.259 6.000 - 6.027: 99.5149% ( 1) 00:16:43.259 6.027 - 6.053: 99.5358% ( 4) 00:16:43.259 6.053 - 6.080: 99.5410% ( 1) 00:16:43.259 6.080 - 6.107: 99.5514% ( 2) 00:16:43.259 6.107 - 6.133: 99.5671% ( 3) 00:16:43.259 6.133 - 6.160: 
99.5775% ( 2) 00:16:43.259 6.160 - 6.187: 99.5879% ( 2) 00:16:43.259 6.187 - 6.213: 99.5932% ( 1) 00:16:43.259 6.213 - 6.240: 99.6140% ( 4) 00:16:43.259 6.240 - 6.267: 99.6192% ( 1) 00:16:43.259 6.267 - 6.293: 99.6245% ( 1) 00:16:43.259 6.347 - 6.373: 99.6297% ( 1) 00:16:43.259 6.507 - 6.533: 99.6349% ( 1) 00:16:43.259 7.040 - 7.093: 99.6401% ( 1) 00:16:43.259 7.200 - 7.253: 99.6453% ( 1) 00:16:43.259 7.253 - 7.307: 99.6662% ( 4) 00:16:43.259 7.307 - 7.360: 99.6714% ( 1) 00:16:43.259 7.360 - 7.413: 99.6766% ( 1) 00:16:43.259 7.467 - 7.520: 99.6818% ( 1) 00:16:43.259 7.573 - 7.627: 99.6870% ( 1) 00:16:43.259 7.627 - 7.680: 99.6923% ( 1) 00:16:43.259 7.680 - 7.733: 99.6975% ( 1) 00:16:43.259 7.733 - 7.787: 99.7131% ( 3) 00:16:43.259 7.787 - 7.840: 99.7183% ( 1) 00:16:43.259 7.840 - 7.893: 99.7288% ( 2) 00:16:43.259 7.893 - 7.947: 99.7392% ( 2) 00:16:43.259 8.000 - 8.053: 99.7601% ( 4) 00:16:43.259 8.053 - 8.107: 99.7757% ( 3) 00:16:43.259 8.160 - 8.213: 99.7809% ( 1) 00:16:43.259 8.267 - 8.320: 99.7861% ( 1) 00:16:43.259 8.320 - 8.373: 99.7914% ( 1) 00:16:43.259 8.373 - 8.427: 99.7966% ( 1) 00:16:43.259 8.480 - 8.533: 99.8018% ( 1) 00:16:43.259 8.533 - 8.587: 99.8122% ( 2) 00:16:43.259 8.587 - 8.640: 99.8227% ( 2) 00:16:43.259 8.640 - 8.693: 99.8279% ( 1) 00:16:43.259 8.747 - 8.800: 99.8331% ( 1) 00:16:43.259 8.800 - 8.853: 99.8435% ( 2) 00:16:43.259 8.853 - 8.907: 99.8487% ( 1) 00:16:43.259 8.960 - 9.013: 99.8540% ( 1) 00:16:43.259 9.067 - 9.120: 99.8592% ( 1) 00:16:43.259 9.173 - 9.227: 99.8644% ( 1) 00:16:43.259 9.333 - 9.387: 99.8696% ( 1) 00:16:43.259 9.440 - 9.493: 99.8748% ( 1) 00:16:43.259 9.600 - 9.653: 99.8800% ( 1) 00:16:43.259 9.707 - 9.760: 99.8852% ( 1) 00:16:43.259 9.867 - 9.920: 99.8905% ( 1) 00:16:43.259 9.920 - 9.973: 99.8957% ( 1) 00:16:43.259 10.133 - 10.187: 99.9009% ( 1) 00:16:43.259 11.307 - 11.360: 99.9061% ( 1) 00:16:43.259 13.973 - 14.080: 99.9113% ( 1) 00:16:43.259 19.520 - 19.627: 99.9165% ( 1) 00:16:43.259 3986.773 - 4014.080: 100.0000% ( 16) 00:16:43.259 00:16:43.259 Complete histogram 00:16:43.259 ================== 00:16:43.259 Range in us Cumulative Count 00:16:43.259 2.400 - 2.413: 0.1408% ( 27) 00:16:43.259 2.413 - 2.427: 1.0588% ( 176) 00:16:43.259 2.427 - 2.440: 1.1371% ( 15) 00:16:43.259 2.440 - 2.453: 1.2831% ( 28) 00:16:43.259 2.467 - 2.480: 18.7774% ( 3354) 00:16:43.259 2.480 - 2.493: 50.4642% ( 6075) 00:16:43.259 2.493 - 2.507: 63.9266% ( 2581) 00:16:43.259 2.507 - 2.520: 77.0551% ( 2517) 00:16:43.259 2.520 - 2.533: 80.4820% ( 657) 00:16:43.259 2.533 - 2.547: 82.6570% ( 417) 00:16:43.259 2.547 - 2.560: 87.9095% ( 1007) 00:16:43.259 2.560 - 2.573: 92.8281% ( 943) 00:16:43.259 2.573 - 2.587: 95.7907% ( 568) 00:16:43.259 2.587 - 2.600: 97.9084% ( 406) 00:16:43.259 2.600 - 2.613: 98.5030% ( 114) 00:16:43.259 2.613 - 2.627: 98.6752% ( 33) 00:16:43.259 2.627 - 2.640: 98.7273% ( 10) 00:16:43.259 2.640 - 2.653: 98.7586% ( 6) 00:16:43.259 2.680 - 2.693: 98.7690% ( 2) 00:16:43.259 2.733 - 2.747: 98.7743% ( 1) 00:16:43.259 2.760 - 2.773: 98.7847% ( 2) 00:16:43.259 2.773 - 2.787: 98.7899% ( 1) 00:16:43.259 2.800 - 2.813: 98.8055% ( 3) 00:16:43.259 2.813 - 2.827: 98.8160% ( 2) 00:16:43.259 2.827 - 2.840: 98.8264% ( 2) 00:16:43.259 2.840 - 2.853: 98.8368% ( 2) 00:16:43.259 2.853 - 2.867: 98.8525% ( 3) 00:16:43.259 2.867 - 2.880: 98.8577% ( 1) 00:16:43.259 2.893 - 2.907: 98.8629% ( 1) 00:16:43.259 2.907 - 2.920: 98.8786% ( 3) 00:16:43.259 2.933 - 2.947: 98.8838% ( 1) 00:16:43.259 3.000 - 3.013: 98.8994% ( 3) 00:16:43.259 3.013 - 3.027: 98.9099% ( 2) 00:16:43.259 
3.040 - 3.053: 98.9203% ( 2) 00:16:43.259 3.053 - 3.067: 98.9307% ( 2) 00:16:43.259 3.067 - 3.080: 98.9359% ( 1) 00:16:43.259 3.133 - 3.147: 98.9412% ( 1) 00:16:43.259 3.160 - 3.173: 98.9620% ( 4) 00:16:43.259 3.173 - 3.187: 98.9777% ( 3) 00:16:43.259 3.187 - 3.2[2024-07-25 10:05:22.333175] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:43.259 00: 98.9881% ( 2) 00:16:43.259 3.200 - 3.213: 98.9933% ( 1) 00:16:43.259 3.213 - 3.227: 98.9985% ( 1) 00:16:43.259 3.227 - 3.240: 99.0090% ( 2) 00:16:43.259 3.240 - 3.253: 99.0351% ( 5) 00:16:43.259 3.253 - 3.267: 99.0455% ( 2) 00:16:43.259 3.267 - 3.280: 99.0507% ( 1) 00:16:43.259 3.280 - 3.293: 99.0611% ( 2) 00:16:43.259 3.387 - 3.400: 99.0663% ( 1) 00:16:43.259 3.413 - 3.440: 99.0768% ( 2) 00:16:43.259 3.467 - 3.493: 99.0924% ( 3) 00:16:43.259 3.493 - 3.520: 99.1133% ( 4) 00:16:43.259 3.520 - 3.547: 99.1289% ( 3) 00:16:43.259 3.547 - 3.573: 99.1394% ( 2) 00:16:43.259 3.573 - 3.600: 99.1498% ( 2) 00:16:43.259 3.600 - 3.627: 99.1654% ( 3) 00:16:43.259 3.627 - 3.653: 99.1707% ( 1) 00:16:43.259 3.653 - 3.680: 99.1759% ( 1) 00:16:43.259 3.733 - 3.760: 99.1915% ( 3) 00:16:43.259 3.760 - 3.787: 99.2072% ( 3) 00:16:43.259 3.787 - 3.813: 99.2176% ( 2) 00:16:43.259 3.813 - 3.840: 99.2280% ( 2) 00:16:43.259 3.867 - 3.893: 99.2333% ( 1) 00:16:43.259 3.920 - 3.947: 99.2385% ( 1) 00:16:43.259 4.027 - 4.053: 99.2437% ( 1) 00:16:43.259 4.053 - 4.080: 99.2489% ( 1) 00:16:43.259 4.267 - 4.293: 99.2541% ( 1) 00:16:43.259 4.720 - 4.747: 99.2646% ( 2) 00:16:43.259 4.827 - 4.853: 99.2698% ( 1) 00:16:43.259 4.987 - 5.013: 99.2750% ( 1) 00:16:43.259 5.040 - 5.067: 99.2802% ( 1) 00:16:43.259 5.467 - 5.493: 99.2854% ( 1) 00:16:43.259 5.493 - 5.520: 99.2906% ( 1) 00:16:43.259 5.600 - 5.627: 99.2958% ( 1) 00:16:43.259 5.627 - 5.653: 99.3011% ( 1) 00:16:43.259 5.760 - 5.787: 99.3063% ( 1) 00:16:43.259 5.787 - 5.813: 99.3115% ( 1) 00:16:43.259 5.813 - 5.840: 99.3167% ( 1) 00:16:43.259 5.893 - 5.920: 99.3219% ( 1) 00:16:43.259 5.920 - 5.947: 99.3271% ( 1) 00:16:43.259 5.973 - 6.000: 99.3376% ( 2) 00:16:43.259 6.000 - 6.027: 99.3428% ( 1) 00:16:43.259 6.080 - 6.107: 99.3480% ( 1) 00:16:43.259 6.107 - 6.133: 99.3532% ( 1) 00:16:43.259 6.160 - 6.187: 99.3584% ( 1) 00:16:43.259 6.213 - 6.240: 99.3689% ( 2) 00:16:43.259 6.240 - 6.267: 99.3793% ( 2) 00:16:43.259 6.267 - 6.293: 99.3845% ( 1) 00:16:43.259 6.293 - 6.320: 99.3897% ( 1) 00:16:43.259 6.400 - 6.427: 99.4002% ( 2) 00:16:43.259 6.453 - 6.480: 99.4106% ( 2) 00:16:43.259 6.507 - 6.533: 99.4210% ( 2) 00:16:43.259 6.533 - 6.560: 99.4315% ( 2) 00:16:43.259 6.560 - 6.587: 99.4367% ( 1) 00:16:43.259 6.587 - 6.613: 99.4419% ( 1) 00:16:43.259 6.640 - 6.667: 99.4471% ( 1) 00:16:43.259 6.773 - 6.800: 99.4523% ( 1) 00:16:43.259 6.800 - 6.827: 99.4575% ( 1) 00:16:43.259 6.987 - 7.040: 99.4628% ( 1) 00:16:43.259 7.093 - 7.147: 99.4732% ( 2) 00:16:43.259 7.360 - 7.413: 99.4836% ( 2) 00:16:43.259 7.680 - 7.733: 99.4888% ( 1) 00:16:43.259 7.840 - 7.893: 99.4941% ( 1) 00:16:43.259 8.480 - 8.533: 99.4993% ( 1) 00:16:43.259 8.907 - 8.960: 99.5045% ( 1) 00:16:43.259 9.173 - 9.227: 99.5097% ( 1) 00:16:43.259 9.547 - 9.600: 99.5149% ( 1) 00:16:43.259 12.853 - 12.907: 99.5201% ( 1) 00:16:43.259 13.493 - 13.547: 99.5253% ( 1) 00:16:43.259 14.827 - 14.933: 99.5306% ( 1) 00:16:43.259 14.933 - 15.040: 99.5358% ( 1) 00:16:43.259 16.000 - 16.107: 99.5410% ( 1) 00:16:43.259 16.640 - 16.747: 99.5462% ( 1) 00:16:43.259 43.520 - 43.733: 99.5514% ( 1) 00:16:43.259 45.440 - 45.653: 99.5566% ( 1) 
00:16:43.259 165.547 - 166.400: 99.5619% ( 1) 00:16:43.259 3986.773 - 4014.080: 99.9896% ( 82) 00:16:43.259 7973.547 - 8028.160: 100.0000% ( 2) 00:16:43.259 00:16:43.259 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:43.259 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:43.259 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:43.259 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:43.259 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:43.520 [ 00:16:43.520 { 00:16:43.520 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:43.520 "subtype": "Discovery", 00:16:43.520 "listen_addresses": [], 00:16:43.521 "allow_any_host": true, 00:16:43.521 "hosts": [] 00:16:43.521 }, 00:16:43.521 { 00:16:43.521 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:43.521 "subtype": "NVMe", 00:16:43.521 "listen_addresses": [ 00:16:43.521 { 00:16:43.521 "trtype": "VFIOUSER", 00:16:43.521 "adrfam": "IPv4", 00:16:43.521 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:43.521 "trsvcid": "0" 00:16:43.521 } 00:16:43.521 ], 00:16:43.521 "allow_any_host": true, 00:16:43.521 "hosts": [], 00:16:43.521 "serial_number": "SPDK1", 00:16:43.521 "model_number": "SPDK bdev Controller", 00:16:43.521 "max_namespaces": 32, 00:16:43.521 "min_cntlid": 1, 00:16:43.521 "max_cntlid": 65519, 00:16:43.521 "namespaces": [ 00:16:43.521 { 00:16:43.521 "nsid": 1, 00:16:43.521 "bdev_name": "Malloc1", 00:16:43.521 "name": "Malloc1", 00:16:43.521 "nguid": "BD29F8108AC347FD8B3622B9BC9197DD", 00:16:43.521 "uuid": "bd29f810-8ac3-47fd-8b36-22b9bc9197dd" 00:16:43.521 } 00:16:43.521 ] 00:16:43.521 }, 00:16:43.521 { 00:16:43.521 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:43.521 "subtype": "NVMe", 00:16:43.521 "listen_addresses": [ 00:16:43.521 { 00:16:43.521 "trtype": "VFIOUSER", 00:16:43.521 "adrfam": "IPv4", 00:16:43.521 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:43.521 "trsvcid": "0" 00:16:43.521 } 00:16:43.521 ], 00:16:43.521 "allow_any_host": true, 00:16:43.521 "hosts": [], 00:16:43.521 "serial_number": "SPDK2", 00:16:43.521 "model_number": "SPDK bdev Controller", 00:16:43.521 "max_namespaces": 32, 00:16:43.521 "min_cntlid": 1, 00:16:43.521 "max_cntlid": 65519, 00:16:43.521 "namespaces": [ 00:16:43.521 { 00:16:43.521 "nsid": 1, 00:16:43.521 "bdev_name": "Malloc2", 00:16:43.521 "name": "Malloc2", 00:16:43.521 "nguid": "7AB2BC3277D64B51A625F741A85F6177", 00:16:43.521 "uuid": "7ab2bc32-77d6-4b51-a625-f741a85f6177" 00:16:43.521 } 00:16:43.521 ] 00:16:43.521 } 00:16:43.521 ] 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1266910 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:43.521 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:43.521 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.782 Malloc3 00:16:43.782 [2024-07-25 10:05:22.724635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.782 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:43.782 [2024-07-25 10:05:22.893787] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:44.043 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:44.043 Asynchronous Event Request test 00:16:44.043 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.043 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:44.043 Registering asynchronous event callbacks... 00:16:44.043 Starting namespace attribute notice tests for all controllers... 00:16:44.043 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:44.043 aer_cb - Changed Namespace 00:16:44.043 Cleaning up... 
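The namespace-attribute AER exercised above is driven entirely over RPC: the aer test program registers its callback and waits, then the harness creates a new malloc bdev and hot-adds it to cnode1, which changes the subsystem's namespace list and causes the target to post a Namespace Attribute Changed event; the callback then fires for log page 4 (the Changed Namespace List) and prints "aer_cb - Changed Namespace" as seen above. A minimal sketch of that RPC sequence, using only the calls shown in the log and run from an SPDK checkout:
# create a 64 MB malloc bdev with 512-byte blocks, then attach it to cnode1 as NSID 2
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
# the updated subsystem listing (with the new nsid 2 / Malloc3 entry) follows below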
00:16:44.043 [ 00:16:44.043 { 00:16:44.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:44.043 "subtype": "Discovery", 00:16:44.043 "listen_addresses": [], 00:16:44.043 "allow_any_host": true, 00:16:44.043 "hosts": [] 00:16:44.043 }, 00:16:44.043 { 00:16:44.043 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:44.043 "subtype": "NVMe", 00:16:44.043 "listen_addresses": [ 00:16:44.043 { 00:16:44.043 "trtype": "VFIOUSER", 00:16:44.043 "adrfam": "IPv4", 00:16:44.043 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:44.043 "trsvcid": "0" 00:16:44.043 } 00:16:44.043 ], 00:16:44.043 "allow_any_host": true, 00:16:44.043 "hosts": [], 00:16:44.043 "serial_number": "SPDK1", 00:16:44.043 "model_number": "SPDK bdev Controller", 00:16:44.043 "max_namespaces": 32, 00:16:44.043 "min_cntlid": 1, 00:16:44.043 "max_cntlid": 65519, 00:16:44.043 "namespaces": [ 00:16:44.043 { 00:16:44.043 "nsid": 1, 00:16:44.043 "bdev_name": "Malloc1", 00:16:44.043 "name": "Malloc1", 00:16:44.043 "nguid": "BD29F8108AC347FD8B3622B9BC9197DD", 00:16:44.043 "uuid": "bd29f810-8ac3-47fd-8b36-22b9bc9197dd" 00:16:44.043 }, 00:16:44.043 { 00:16:44.043 "nsid": 2, 00:16:44.043 "bdev_name": "Malloc3", 00:16:44.043 "name": "Malloc3", 00:16:44.043 "nguid": "2D8CA347007B44128FC1469A270043F6", 00:16:44.043 "uuid": "2d8ca347-007b-4412-8fc1-469a270043f6" 00:16:44.043 } 00:16:44.043 ] 00:16:44.043 }, 00:16:44.043 { 00:16:44.043 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:44.043 "subtype": "NVMe", 00:16:44.043 "listen_addresses": [ 00:16:44.043 { 00:16:44.043 "trtype": "VFIOUSER", 00:16:44.043 "adrfam": "IPv4", 00:16:44.043 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:44.043 "trsvcid": "0" 00:16:44.043 } 00:16:44.043 ], 00:16:44.043 "allow_any_host": true, 00:16:44.043 "hosts": [], 00:16:44.043 "serial_number": "SPDK2", 00:16:44.043 "model_number": "SPDK bdev Controller", 00:16:44.043 "max_namespaces": 32, 00:16:44.043 "min_cntlid": 1, 00:16:44.043 "max_cntlid": 65519, 00:16:44.043 "namespaces": [ 00:16:44.043 { 00:16:44.043 "nsid": 1, 00:16:44.043 "bdev_name": "Malloc2", 00:16:44.043 "name": "Malloc2", 00:16:44.043 "nguid": "7AB2BC3277D64B51A625F741A85F6177", 00:16:44.043 "uuid": "7ab2bc32-77d6-4b51-a625-f741a85f6177" 00:16:44.043 } 00:16:44.043 ] 00:16:44.043 } 00:16:44.043 ] 00:16:44.043 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1266910 00:16:44.043 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:44.044 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:44.044 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:44.044 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:44.044 [2024-07-25 10:05:23.108822] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
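For the second vfio-user controller the same identify pass is now repeated: spdk_nvme_identify attaches to /var/run/vfio-user/domain/vfio-user2/2, and the three -L options switch on the nvme, nvme_vfio and vfio_pci debug log components, which is what produces the verbose BAR-mapping and controller-initialization traces below (DEBUG-level output like this generally requires an SPDK debug build). A minimal sketch of the invocation, copied from the log with the workspace prefix shortened to an SPDK checkout:
# -g and the transport ID string are exactly as used by the test; -L enables named debug log flags
build/bin/spdk_nvme_identify \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -g -L nvme -L nvme_vfio -L vfio_pci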
00:16:44.044 [2024-07-25 10:05:23.108865] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266955 ] 00:16:44.044 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.044 [2024-07-25 10:05:23.139755] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:44.044 [2024-07-25 10:05:23.148414] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:44.044 [2024-07-25 10:05:23.148436] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5732147000 00:16:44.044 [2024-07-25 10:05:23.149411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.150417] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.151441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.152439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.153446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.154449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.155452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.156463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:44.044 [2024-07-25 10:05:23.157470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:44.044 [2024-07-25 10:05:23.157480] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f573213c000 00:16:44.044 [2024-07-25 10:05:23.158807] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:44.044 [2024-07-25 10:05:23.175043] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:44.044 [2024-07-25 10:05:23.175063] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:44.307 [2024-07-25 10:05:23.180143] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:44.307 [2024-07-25 10:05:23.180189] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:44.307 [2024-07-25 10:05:23.180280] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:16:44.307 [2024-07-25 10:05:23.180295] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:44.307 [2024-07-25 10:05:23.180300] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:44.307 [2024-07-25 10:05:23.181144] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:44.307 [2024-07-25 10:05:23.181156] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:44.307 [2024-07-25 10:05:23.181163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:44.307 [2024-07-25 10:05:23.182153] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:44.307 [2024-07-25 10:05:23.182161] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:44.307 [2024-07-25 10:05:23.182169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:44.307 [2024-07-25 10:05:23.183163] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:44.307 [2024-07-25 10:05:23.183172] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:44.307 [2024-07-25 10:05:23.184173] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:44.307 [2024-07-25 10:05:23.184183] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:44.307 [2024-07-25 10:05:23.184188] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:44.307 [2024-07-25 10:05:23.184194] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:44.307 [2024-07-25 10:05:23.184300] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:44.307 [2024-07-25 10:05:23.184304] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:44.307 [2024-07-25 10:05:23.184309] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:44.307 [2024-07-25 10:05:23.185180] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:44.307 [2024-07-25 10:05:23.186187] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:44.307 [2024-07-25 10:05:23.187188] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:44.307 [2024-07-25 10:05:23.188196] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:44.307 [2024-07-25 10:05:23.188240] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:44.308 [2024-07-25 10:05:23.189211] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:44.308 [2024-07-25 10:05:23.189220] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:44.308 [2024-07-25 10:05:23.189225] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.189246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:44.308 [2024-07-25 10:05:23.189254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.189266] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:44.308 [2024-07-25 10:05:23.189271] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:44.308 [2024-07-25 10:05:23.189274] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.308 [2024-07-25 10:05:23.189286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.198211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.198224] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:44.308 [2024-07-25 10:05:23.198229] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:44.308 [2024-07-25 10:05:23.198233] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:44.308 [2024-07-25 10:05:23.198238] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:44.308 [2024-07-25 10:05:23.198242] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:44.308 [2024-07-25 10:05:23.198247] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:44.308 [2024-07-25 10:05:23.198251] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.198259] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.198272] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.206211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.206228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.308 [2024-07-25 10:05:23.206237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.308 [2024-07-25 10:05:23.206247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.308 [2024-07-25 10:05:23.206255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:44.308 [2024-07-25 10:05:23.206260] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.206268] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.206277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.214206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.214216] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:44.308 [2024-07-25 10:05:23.214221] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.214230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.214236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.214244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.222207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.222273] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.222282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.222290] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:44.308 [2024-07-25 10:05:23.222295] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:44.308 [2024-07-25 
10:05:23.222299] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.308 [2024-07-25 10:05:23.222305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.230209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.230221] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:44.308 [2024-07-25 10:05:23.230233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.230241] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.230248] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:44.308 [2024-07-25 10:05:23.230252] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:44.308 [2024-07-25 10:05:23.230256] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.308 [2024-07-25 10:05:23.230262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.238209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.238228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.238236] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.238243] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:44.308 [2024-07-25 10:05:23.238248] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:44.308 [2024-07-25 10:05:23.238251] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.308 [2024-07-25 10:05:23.238258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.246207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.246217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246224] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246232] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:44.308 [2024-07-25 
10:05:23.246239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246244] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246249] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246253] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:44.308 [2024-07-25 10:05:23.246258] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:44.308 [2024-07-25 10:05:23.246263] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:44.308 [2024-07-25 10:05:23.246278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.254207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.254222] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.262208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.262223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.270207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.270221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:44.308 [2024-07-25 10:05:23.278208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:44.308 [2024-07-25 10:05:23.278227] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:44.308 [2024-07-25 10:05:23.278232] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:44.308 [2024-07-25 10:05:23.278236] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:44.308 [2024-07-25 10:05:23.278239] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:44.308 [2024-07-25 10:05:23.278243] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:44.308 [2024-07-25 10:05:23.278249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:44.308 [2024-07-25 10:05:23.278257] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:44.309 [2024-07-25 10:05:23.278261] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:16:44.309 [2024-07-25 10:05:23.278264] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.309 [2024-07-25 10:05:23.278270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:44.309 [2024-07-25 10:05:23.278278] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:44.309 [2024-07-25 10:05:23.278282] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:44.309 [2024-07-25 10:05:23.278285] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.309 [2024-07-25 10:05:23.278291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:44.309 [2024-07-25 10:05:23.278298] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:44.309 [2024-07-25 10:05:23.278303] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:44.309 [2024-07-25 10:05:23.278306] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:44.309 [2024-07-25 10:05:23.278312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:44.309 [2024-07-25 10:05:23.286209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:44.309 [2024-07-25 10:05:23.286225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:44.309 [2024-07-25 10:05:23.286235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:44.309 [2024-07-25 10:05:23.286242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:44.309 ===================================================== 00:16:44.309 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:44.309 ===================================================== 00:16:44.309 Controller Capabilities/Features 00:16:44.309 ================================ 00:16:44.309 Vendor ID: 4e58 00:16:44.309 Subsystem Vendor ID: 4e58 00:16:44.309 Serial Number: SPDK2 00:16:44.309 Model Number: SPDK bdev Controller 00:16:44.309 Firmware Version: 24.09 00:16:44.309 Recommended Arb Burst: 6 00:16:44.309 IEEE OUI Identifier: 8d 6b 50 00:16:44.309 Multi-path I/O 00:16:44.309 May have multiple subsystem ports: Yes 00:16:44.309 May have multiple controllers: Yes 00:16:44.309 Associated with SR-IOV VF: No 00:16:44.309 Max Data Transfer Size: 131072 00:16:44.309 Max Number of Namespaces: 32 00:16:44.309 Max Number of I/O Queues: 127 00:16:44.309 NVMe Specification Version (VS): 1.3 00:16:44.309 NVMe Specification Version (Identify): 1.3 00:16:44.309 Maximum Queue Entries: 256 00:16:44.309 Contiguous Queues Required: Yes 00:16:44.309 Arbitration Mechanisms Supported 00:16:44.309 Weighted Round Robin: Not Supported 00:16:44.309 Vendor Specific: Not Supported 00:16:44.309 Reset Timeout: 15000 ms 00:16:44.309 Doorbell Stride: 4 
bytes 00:16:44.309 NVM Subsystem Reset: Not Supported 00:16:44.309 Command Sets Supported 00:16:44.309 NVM Command Set: Supported 00:16:44.309 Boot Partition: Not Supported 00:16:44.309 Memory Page Size Minimum: 4096 bytes 00:16:44.309 Memory Page Size Maximum: 4096 bytes 00:16:44.309 Persistent Memory Region: Not Supported 00:16:44.309 Optional Asynchronous Events Supported 00:16:44.309 Namespace Attribute Notices: Supported 00:16:44.309 Firmware Activation Notices: Not Supported 00:16:44.309 ANA Change Notices: Not Supported 00:16:44.309 PLE Aggregate Log Change Notices: Not Supported 00:16:44.309 LBA Status Info Alert Notices: Not Supported 00:16:44.309 EGE Aggregate Log Change Notices: Not Supported 00:16:44.309 Normal NVM Subsystem Shutdown event: Not Supported 00:16:44.309 Zone Descriptor Change Notices: Not Supported 00:16:44.309 Discovery Log Change Notices: Not Supported 00:16:44.309 Controller Attributes 00:16:44.309 128-bit Host Identifier: Supported 00:16:44.309 Non-Operational Permissive Mode: Not Supported 00:16:44.309 NVM Sets: Not Supported 00:16:44.309 Read Recovery Levels: Not Supported 00:16:44.309 Endurance Groups: Not Supported 00:16:44.309 Predictable Latency Mode: Not Supported 00:16:44.309 Traffic Based Keep ALive: Not Supported 00:16:44.309 Namespace Granularity: Not Supported 00:16:44.309 SQ Associations: Not Supported 00:16:44.309 UUID List: Not Supported 00:16:44.309 Multi-Domain Subsystem: Not Supported 00:16:44.309 Fixed Capacity Management: Not Supported 00:16:44.309 Variable Capacity Management: Not Supported 00:16:44.309 Delete Endurance Group: Not Supported 00:16:44.309 Delete NVM Set: Not Supported 00:16:44.309 Extended LBA Formats Supported: Not Supported 00:16:44.309 Flexible Data Placement Supported: Not Supported 00:16:44.309 00:16:44.309 Controller Memory Buffer Support 00:16:44.309 ================================ 00:16:44.309 Supported: No 00:16:44.309 00:16:44.309 Persistent Memory Region Support 00:16:44.309 ================================ 00:16:44.309 Supported: No 00:16:44.309 00:16:44.309 Admin Command Set Attributes 00:16:44.309 ============================ 00:16:44.309 Security Send/Receive: Not Supported 00:16:44.309 Format NVM: Not Supported 00:16:44.309 Firmware Activate/Download: Not Supported 00:16:44.309 Namespace Management: Not Supported 00:16:44.309 Device Self-Test: Not Supported 00:16:44.309 Directives: Not Supported 00:16:44.309 NVMe-MI: Not Supported 00:16:44.309 Virtualization Management: Not Supported 00:16:44.309 Doorbell Buffer Config: Not Supported 00:16:44.309 Get LBA Status Capability: Not Supported 00:16:44.309 Command & Feature Lockdown Capability: Not Supported 00:16:44.309 Abort Command Limit: 4 00:16:44.309 Async Event Request Limit: 4 00:16:44.309 Number of Firmware Slots: N/A 00:16:44.309 Firmware Slot 1 Read-Only: N/A 00:16:44.309 Firmware Activation Without Reset: N/A 00:16:44.309 Multiple Update Detection Support: N/A 00:16:44.309 Firmware Update Granularity: No Information Provided 00:16:44.309 Per-Namespace SMART Log: No 00:16:44.309 Asymmetric Namespace Access Log Page: Not Supported 00:16:44.309 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:44.309 Command Effects Log Page: Supported 00:16:44.309 Get Log Page Extended Data: Supported 00:16:44.309 Telemetry Log Pages: Not Supported 00:16:44.309 Persistent Event Log Pages: Not Supported 00:16:44.309 Supported Log Pages Log Page: May Support 00:16:44.309 Commands Supported & Effects Log Page: Not Supported 00:16:44.309 Feature Identifiers & Effects Log 
Page:May Support 00:16:44.309 NVMe-MI Commands & Effects Log Page: May Support 00:16:44.309 Data Area 4 for Telemetry Log: Not Supported 00:16:44.309 Error Log Page Entries Supported: 128 00:16:44.309 Keep Alive: Supported 00:16:44.309 Keep Alive Granularity: 10000 ms 00:16:44.309 00:16:44.309 NVM Command Set Attributes 00:16:44.309 ========================== 00:16:44.309 Submission Queue Entry Size 00:16:44.309 Max: 64 00:16:44.309 Min: 64 00:16:44.309 Completion Queue Entry Size 00:16:44.309 Max: 16 00:16:44.309 Min: 16 00:16:44.309 Number of Namespaces: 32 00:16:44.309 Compare Command: Supported 00:16:44.309 Write Uncorrectable Command: Not Supported 00:16:44.309 Dataset Management Command: Supported 00:16:44.309 Write Zeroes Command: Supported 00:16:44.309 Set Features Save Field: Not Supported 00:16:44.309 Reservations: Not Supported 00:16:44.309 Timestamp: Not Supported 00:16:44.309 Copy: Supported 00:16:44.309 Volatile Write Cache: Present 00:16:44.309 Atomic Write Unit (Normal): 1 00:16:44.309 Atomic Write Unit (PFail): 1 00:16:44.309 Atomic Compare & Write Unit: 1 00:16:44.309 Fused Compare & Write: Supported 00:16:44.309 Scatter-Gather List 00:16:44.309 SGL Command Set: Supported (Dword aligned) 00:16:44.309 SGL Keyed: Not Supported 00:16:44.309 SGL Bit Bucket Descriptor: Not Supported 00:16:44.309 SGL Metadata Pointer: Not Supported 00:16:44.309 Oversized SGL: Not Supported 00:16:44.309 SGL Metadata Address: Not Supported 00:16:44.309 SGL Offset: Not Supported 00:16:44.309 Transport SGL Data Block: Not Supported 00:16:44.309 Replay Protected Memory Block: Not Supported 00:16:44.309 00:16:44.309 Firmware Slot Information 00:16:44.309 ========================= 00:16:44.309 Active slot: 1 00:16:44.309 Slot 1 Firmware Revision: 24.09 00:16:44.309 00:16:44.309 00:16:44.309 Commands Supported and Effects 00:16:44.309 ============================== 00:16:44.309 Admin Commands 00:16:44.309 -------------- 00:16:44.309 Get Log Page (02h): Supported 00:16:44.309 Identify (06h): Supported 00:16:44.309 Abort (08h): Supported 00:16:44.309 Set Features (09h): Supported 00:16:44.309 Get Features (0Ah): Supported 00:16:44.310 Asynchronous Event Request (0Ch): Supported 00:16:44.310 Keep Alive (18h): Supported 00:16:44.310 I/O Commands 00:16:44.310 ------------ 00:16:44.310 Flush (00h): Supported LBA-Change 00:16:44.310 Write (01h): Supported LBA-Change 00:16:44.310 Read (02h): Supported 00:16:44.310 Compare (05h): Supported 00:16:44.310 Write Zeroes (08h): Supported LBA-Change 00:16:44.310 Dataset Management (09h): Supported LBA-Change 00:16:44.310 Copy (19h): Supported LBA-Change 00:16:44.310 00:16:44.310 Error Log 00:16:44.310 ========= 00:16:44.310 00:16:44.310 Arbitration 00:16:44.310 =========== 00:16:44.310 Arbitration Burst: 1 00:16:44.310 00:16:44.310 Power Management 00:16:44.310 ================ 00:16:44.310 Number of Power States: 1 00:16:44.310 Current Power State: Power State #0 00:16:44.310 Power State #0: 00:16:44.310 Max Power: 0.00 W 00:16:44.310 Non-Operational State: Operational 00:16:44.310 Entry Latency: Not Reported 00:16:44.310 Exit Latency: Not Reported 00:16:44.310 Relative Read Throughput: 0 00:16:44.310 Relative Read Latency: 0 00:16:44.310 Relative Write Throughput: 0 00:16:44.310 Relative Write Latency: 0 00:16:44.310 Idle Power: Not Reported 00:16:44.310 Active Power: Not Reported 00:16:44.310 Non-Operational Permissive Mode: Not Supported 00:16:44.310 00:16:44.310 Health Information 00:16:44.310 ================== 00:16:44.310 Critical Warnings: 00:16:44.310 
Available Spare Space: OK 00:16:44.310 Temperature: OK 00:16:44.310 Device Reliability: OK 00:16:44.310 Read Only: No 00:16:44.310 Volatile Memory Backup: OK 00:16:44.310 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:44.310 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:44.310 Available Spare: 0% 00:16:44.310 Available Sp[2024-07-25 10:05:23.286341] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:44.310 [2024-07-25 10:05:23.294206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:44.310 [2024-07-25 10:05:23.294238] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:44.310 [2024-07-25 10:05:23.294247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.310 [2024-07-25 10:05:23.294254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.310 [2024-07-25 10:05:23.294260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.310 [2024-07-25 10:05:23.294266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:44.310 [2024-07-25 10:05:23.294323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:44.310 [2024-07-25 10:05:23.294335] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:44.310 [2024-07-25 10:05:23.295326] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:44.310 [2024-07-25 10:05:23.295374] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:44.310 [2024-07-25 10:05:23.295381] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:44.310 [2024-07-25 10:05:23.296328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:44.310 [2024-07-25 10:05:23.296341] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:44.310 [2024-07-25 10:05:23.296387] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:44.310 [2024-07-25 10:05:23.297772] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:44.310 are Threshold: 0% 00:16:44.310 Life Percentage Used: 0% 00:16:44.310 Data Units Read: 0 00:16:44.310 Data Units Written: 0 00:16:44.310 Host Read Commands: 0 00:16:44.310 Host Write Commands: 0 00:16:44.310 Controller Busy Time: 0 minutes 00:16:44.310 Power Cycles: 0 00:16:44.310 Power On Hours: 0 hours 00:16:44.310 Unsafe Shutdowns: 0 00:16:44.310 Unrecoverable Media Errors: 0 00:16:44.310 Lifetime Error Log Entries: 0 00:16:44.310 Warning Temperature Time: 0 minutes 00:16:44.310 Critical Temperature Time: 0 minutes 00:16:44.310 
00:16:44.310 Number of Queues 00:16:44.310 ================ 00:16:44.310 Number of I/O Submission Queues: 127 00:16:44.310 Number of I/O Completion Queues: 127 00:16:44.310 00:16:44.310 Active Namespaces 00:16:44.310 ================= 00:16:44.310 Namespace ID:1 00:16:44.310 Error Recovery Timeout: Unlimited 00:16:44.310 Command Set Identifier: NVM (00h) 00:16:44.310 Deallocate: Supported 00:16:44.310 Deallocated/Unwritten Error: Not Supported 00:16:44.310 Deallocated Read Value: Unknown 00:16:44.310 Deallocate in Write Zeroes: Not Supported 00:16:44.310 Deallocated Guard Field: 0xFFFF 00:16:44.310 Flush: Supported 00:16:44.310 Reservation: Supported 00:16:44.310 Namespace Sharing Capabilities: Multiple Controllers 00:16:44.310 Size (in LBAs): 131072 (0GiB) 00:16:44.310 Capacity (in LBAs): 131072 (0GiB) 00:16:44.310 Utilization (in LBAs): 131072 (0GiB) 00:16:44.310 NGUID: 7AB2BC3277D64B51A625F741A85F6177 00:16:44.310 UUID: 7ab2bc32-77d6-4b51-a625-f741a85f6177 00:16:44.310 Thin Provisioning: Not Supported 00:16:44.310 Per-NS Atomic Units: Yes 00:16:44.310 Atomic Boundary Size (Normal): 0 00:16:44.310 Atomic Boundary Size (PFail): 0 00:16:44.310 Atomic Boundary Offset: 0 00:16:44.310 Maximum Single Source Range Length: 65535 00:16:44.310 Maximum Copy Length: 65535 00:16:44.310 Maximum Source Range Count: 1 00:16:44.310 NGUID/EUI64 Never Reused: No 00:16:44.310 Namespace Write Protected: No 00:16:44.310 Number of LBA Formats: 1 00:16:44.310 Current LBA Format: LBA Format #00 00:16:44.310 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:44.310 00:16:44.310 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:44.310 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.571 [2024-07-25 10:05:23.482225] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.862 Initializing NVMe Controllers 00:16:49.862 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:49.862 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:49.862 Initialization complete. Launching workers. 
00:16:49.862 ======================================================== 00:16:49.862 Latency(us) 00:16:49.862 Device Information : IOPS MiB/s Average min max 00:16:49.862 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39984.00 156.19 3203.66 839.51 6811.45 00:16:49.862 ======================================================== 00:16:49.862 Total : 39984.00 156.19 3203.66 839.51 6811.45 00:16:49.862 00:16:49.862 [2024-07-25 10:05:28.590388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.862 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:49.862 EAL: No free 2048 kB hugepages reported on node 1 00:16:49.862 [2024-07-25 10:05:28.772952] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:55.155 Initializing NVMe Controllers 00:16:55.155 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:55.155 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:55.155 Initialization complete. Launching workers. 00:16:55.155 ======================================================== 00:16:55.155 Latency(us) 00:16:55.155 Device Information : IOPS MiB/s Average min max 00:16:55.155 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35354.19 138.10 3620.64 1107.31 10653.89 00:16:55.155 ======================================================== 00:16:55.155 Total : 35354.19 138.10 3620.64 1107.31 10653.89 00:16:55.155 00:16:55.155 [2024-07-25 10:05:33.797261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.155 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:55.155 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.155 [2024-07-25 10:05:33.986593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:00.503 [2024-07-25 10:05:39.126302] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:00.503 Initializing NVMe Controllers 00:17:00.503 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:00.503 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:00.503 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:00.503 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:00.503 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:00.503 Initialization complete. Launching workers. 
00:17:00.503 Starting thread on core 2 00:17:00.503 Starting thread on core 3 00:17:00.503 Starting thread on core 1 00:17:00.503 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:00.503 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.503 [2024-07-25 10:05:39.374560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.807 [2024-07-25 10:05:42.425624] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.807 Initializing NVMe Controllers 00:17:03.807 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.807 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.807 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:03.807 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:03.807 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:03.807 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:03.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:03.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:03.807 Initialization complete. Launching workers. 00:17:03.807 Starting thread on core 1 with urgent priority queue 00:17:03.807 Starting thread on core 2 with urgent priority queue 00:17:03.807 Starting thread on core 3 with urgent priority queue 00:17:03.807 Starting thread on core 0 with urgent priority queue 00:17:03.807 SPDK bdev Controller (SPDK2 ) core 0: 9871.33 IO/s 10.13 secs/100000 ios 00:17:03.807 SPDK bdev Controller (SPDK2 ) core 1: 9507.00 IO/s 10.52 secs/100000 ios 00:17:03.807 SPDK bdev Controller (SPDK2 ) core 2: 16713.33 IO/s 5.98 secs/100000 ios 00:17:03.807 SPDK bdev Controller (SPDK2 ) core 3: 10960.67 IO/s 9.12 secs/100000 ios 00:17:03.807 ======================================================== 00:17:03.807 00:17:03.807 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:03.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.807 [2024-07-25 10:05:42.693622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.807 Initializing NVMe Controllers 00:17:03.807 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.807 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.807 Namespace ID: 1 size: 0GB 00:17:03.807 Initialization complete. 00:17:03.807 INFO: using host memory buffer for IO 00:17:03.807 Hello world! 
00:17:03.807 [2024-07-25 10:05:42.703687] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.807 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:03.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.068 [2024-07-25 10:05:42.958569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:05.010 Initializing NVMe Controllers 00:17:05.010 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.010 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.010 Initialization complete. Launching workers. 00:17:05.010 submit (in ns) avg, min, max = 8677.4, 3925.8, 4007784.2 00:17:05.010 complete (in ns) avg, min, max = 17747.0, 2369.2, 4110860.0 00:17:05.010 00:17:05.010 Submit histogram 00:17:05.010 ================ 00:17:05.010 Range in us Cumulative Count 00:17:05.010 3.920 - 3.947: 1.5310% ( 293) 00:17:05.010 3.947 - 3.973: 7.6027% ( 1162) 00:17:05.010 3.973 - 4.000: 18.0270% ( 1995) 00:17:05.010 4.000 - 4.027: 28.5558% ( 2015) 00:17:05.010 4.027 - 4.053: 38.5359% ( 1910) 00:17:05.010 4.053 - 4.080: 49.5872% ( 2115) 00:17:05.010 4.080 - 4.107: 65.7279% ( 3089) 00:17:05.010 4.107 - 4.133: 80.3375% ( 2796) 00:17:05.010 4.133 - 4.160: 92.0577% ( 2243) 00:17:05.010 4.160 - 4.187: 97.0007% ( 946) 00:17:05.010 4.187 - 4.213: 98.5578% ( 298) 00:17:05.010 4.213 - 4.240: 99.1483% ( 113) 00:17:05.010 4.240 - 4.267: 99.3730% ( 43) 00:17:05.010 4.267 - 4.293: 99.4409% ( 13) 00:17:05.010 4.293 - 4.320: 99.4723% ( 6) 00:17:05.010 4.320 - 4.347: 99.4827% ( 2) 00:17:05.010 4.347 - 4.373: 99.4879% ( 1) 00:17:05.010 4.400 - 4.427: 99.4932% ( 1) 00:17:05.010 4.587 - 4.613: 99.4984% ( 1) 00:17:05.010 4.773 - 4.800: 99.5036% ( 1) 00:17:05.010 5.013 - 5.040: 99.5088% ( 1) 00:17:05.010 5.093 - 5.120: 99.5141% ( 1) 00:17:05.010 5.120 - 5.147: 99.5193% ( 1) 00:17:05.010 5.173 - 5.200: 99.5245% ( 1) 00:17:05.010 5.307 - 5.333: 99.5297% ( 1) 00:17:05.010 5.467 - 5.493: 99.5350% ( 1) 00:17:05.010 5.653 - 5.680: 99.5402% ( 1) 00:17:05.010 5.813 - 5.840: 99.5506% ( 2) 00:17:05.010 6.133 - 6.160: 99.5559% ( 1) 00:17:05.010 6.160 - 6.187: 99.5611% ( 1) 00:17:05.010 6.187 - 6.213: 99.5663% ( 1) 00:17:05.010 6.213 - 6.240: 99.5768% ( 2) 00:17:05.010 6.240 - 6.267: 99.5820% ( 1) 00:17:05.010 6.293 - 6.320: 99.5872% ( 1) 00:17:05.010 6.320 - 6.347: 99.6029% ( 3) 00:17:05.010 6.347 - 6.373: 99.6081% ( 1) 00:17:05.010 6.373 - 6.400: 99.6133% ( 1) 00:17:05.010 6.400 - 6.427: 99.6186% ( 1) 00:17:05.010 6.427 - 6.453: 99.6238% ( 1) 00:17:05.010 6.453 - 6.480: 99.6290% ( 1) 00:17:05.010 6.480 - 6.507: 99.6395% ( 2) 00:17:05.010 6.507 - 6.533: 99.6447% ( 1) 00:17:05.010 6.587 - 6.613: 99.6499% ( 1) 00:17:05.010 6.640 - 6.667: 99.6551% ( 1) 00:17:05.010 6.693 - 6.720: 99.6604% ( 1) 00:17:05.010 6.800 - 6.827: 99.6656% ( 1) 00:17:05.010 6.827 - 6.880: 99.6760% ( 2) 00:17:05.010 6.933 - 6.987: 99.6813% ( 1) 00:17:05.010 7.040 - 7.093: 99.6865% ( 1) 00:17:05.010 7.147 - 7.200: 99.6969% ( 2) 00:17:05.010 7.200 - 7.253: 99.7074% ( 2) 00:17:05.010 7.307 - 7.360: 99.7178% ( 2) 00:17:05.010 7.413 - 7.467: 99.7335% ( 3) 00:17:05.010 7.467 - 7.520: 99.7387% ( 1) 00:17:05.010 7.573 - 7.627: 99.7440% ( 1) 00:17:05.010 7.680 - 7.733: 99.7492% ( 1) 00:17:05.010 7.733 - 7.787: 
99.7544% ( 1) 00:17:05.010 7.787 - 7.840: 99.7596% ( 1) 00:17:05.010 7.840 - 7.893: 99.7649% ( 1) 00:17:05.010 7.893 - 7.947: 99.7753% ( 2) 00:17:05.010 7.947 - 8.000: 99.7910% ( 3) 00:17:05.010 8.000 - 8.053: 99.7962% ( 1) 00:17:05.010 8.160 - 8.213: 99.8014% ( 1) 00:17:05.010 8.320 - 8.373: 99.8119% ( 2) 00:17:05.010 8.373 - 8.427: 99.8223% ( 2) 00:17:05.011 8.427 - 8.480: 99.8276% ( 1) 00:17:05.011 8.587 - 8.640: 99.8328% ( 1) 00:17:05.011 8.693 - 8.747: 99.8432% ( 2) 00:17:05.011 8.747 - 8.800: 99.8485% ( 1) 00:17:05.011 8.800 - 8.853: 99.8589% ( 2) 00:17:05.011 8.960 - 9.013: 99.8641% ( 1) 00:17:05.011 9.067 - 9.120: 99.8694% ( 1) 00:17:05.011 9.493 - 9.547: 99.8746% ( 1) 00:17:05.011 9.653 - 9.707: 99.8798% ( 1) 00:17:05.011 9.813 - 9.867: 99.8850% ( 1) 00:17:05.011 3986.773 - 4014.080: 100.0000% ( 22) 00:17:05.011 00:17:05.011 Complete histogram 00:17:05.011 ================== 00:17:05.011 Range in us Cumulative Count 00:17:05.011 2.360 - 2.373: 0.0105% ( 2) 00:17:05.011 2.373 - [2024-07-25 10:05:44.054857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:05.011 2.387: 0.0470% ( 7) 00:17:05.011 2.387 - 2.400: 1.0712% ( 196) 00:17:05.011 2.400 - 2.413: 1.1652% ( 18) 00:17:05.011 2.413 - 2.427: 1.2697% ( 20) 00:17:05.011 2.427 - 2.440: 1.3115% ( 8) 00:17:05.011 2.440 - 2.453: 8.0416% ( 1288) 00:17:05.011 2.453 - 2.467: 48.4481% ( 7733) 00:17:05.011 2.467 - 2.480: 58.3499% ( 1895) 00:17:05.011 2.480 - 2.493: 75.5931% ( 3300) 00:17:05.011 2.493 - 2.507: 80.5413% ( 947) 00:17:05.011 2.507 - 2.520: 82.4120% ( 358) 00:17:05.011 2.520 - 2.533: 86.5869% ( 799) 00:17:05.011 2.533 - 2.547: 92.0159% ( 1039) 00:17:05.011 2.547 - 2.560: 95.5795% ( 682) 00:17:05.011 2.560 - 2.573: 97.8995% ( 444) 00:17:05.011 2.573 - 2.587: 99.0020% ( 211) 00:17:05.011 2.587 - 2.600: 99.2789% ( 53) 00:17:05.011 2.600 - 2.613: 99.3364% ( 11) 00:17:05.011 2.613 - 2.627: 99.3573% ( 4) 00:17:05.011 4.427 - 4.453: 99.3625% ( 1) 00:17:05.011 4.453 - 4.480: 99.3678% ( 1) 00:17:05.011 4.587 - 4.613: 99.3834% ( 3) 00:17:05.011 4.613 - 4.640: 99.3939% ( 2) 00:17:05.011 4.640 - 4.667: 99.4043% ( 2) 00:17:05.011 4.693 - 4.720: 99.4096% ( 1) 00:17:05.011 4.800 - 4.827: 99.4148% ( 1) 00:17:05.011 4.880 - 4.907: 99.4200% ( 1) 00:17:05.011 4.933 - 4.960: 99.4252% ( 1) 00:17:05.011 5.120 - 5.147: 99.4305% ( 1) 00:17:05.011 5.253 - 5.280: 99.4357% ( 1) 00:17:05.011 5.467 - 5.493: 99.4409% ( 1) 00:17:05.011 5.493 - 5.520: 99.4514% ( 2) 00:17:05.011 5.733 - 5.760: 99.4618% ( 2) 00:17:05.011 5.760 - 5.787: 99.4670% ( 1) 00:17:05.011 5.840 - 5.867: 99.4723% ( 1) 00:17:05.011 5.893 - 5.920: 99.4827% ( 2) 00:17:05.011 5.920 - 5.947: 99.4932% ( 2) 00:17:05.011 6.080 - 6.107: 99.5036% ( 2) 00:17:05.011 6.133 - 6.160: 99.5193% ( 3) 00:17:05.011 6.187 - 6.213: 99.5245% ( 1) 00:17:05.011 6.347 - 6.373: 99.5297% ( 1) 00:17:05.011 6.373 - 6.400: 99.5350% ( 1) 00:17:05.011 6.453 - 6.480: 99.5454% ( 2) 00:17:05.011 6.560 - 6.587: 99.5559% ( 2) 00:17:05.011 6.693 - 6.720: 99.5611% ( 1) 00:17:05.011 6.827 - 6.880: 99.5663% ( 1) 00:17:05.011 6.933 - 6.987: 99.5715% ( 1) 00:17:05.011 6.987 - 7.040: 99.5768% ( 1) 00:17:05.011 7.147 - 7.200: 99.5820% ( 1) 00:17:05.011 7.253 - 7.307: 99.5872% ( 1) 00:17:05.011 7.307 - 7.360: 99.5924% ( 1) 00:17:05.011 7.520 - 7.573: 99.5977% ( 1) 00:17:05.011 7.787 - 7.840: 99.6029% ( 1) 00:17:05.011 7.893 - 7.947: 99.6081% ( 1) 00:17:05.011 8.427 - 8.480: 99.6133% ( 1) 00:17:05.011 43.093 - 43.307: 99.6186% ( 1) 00:17:05.011 3986.773 - 4014.080: 99.9843% ( 70) 
00:17:05.011 4041.387 - 4068.693: 99.9895% ( 1) 00:17:05.011 4068.693 - 4096.000: 99.9948% ( 1) 00:17:05.011 4096.000 - 4123.307: 100.0000% ( 1) 00:17:05.011 00:17:05.011 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:05.011 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:05.011 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:05.011 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:05.011 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:05.272 [ 00:17:05.272 { 00:17:05.272 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:05.272 "subtype": "Discovery", 00:17:05.272 "listen_addresses": [], 00:17:05.272 "allow_any_host": true, 00:17:05.272 "hosts": [] 00:17:05.272 }, 00:17:05.272 { 00:17:05.272 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:05.272 "subtype": "NVMe", 00:17:05.272 "listen_addresses": [ 00:17:05.272 { 00:17:05.272 "trtype": "VFIOUSER", 00:17:05.272 "adrfam": "IPv4", 00:17:05.272 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:05.272 "trsvcid": "0" 00:17:05.272 } 00:17:05.272 ], 00:17:05.272 "allow_any_host": true, 00:17:05.272 "hosts": [], 00:17:05.272 "serial_number": "SPDK1", 00:17:05.272 "model_number": "SPDK bdev Controller", 00:17:05.272 "max_namespaces": 32, 00:17:05.272 "min_cntlid": 1, 00:17:05.272 "max_cntlid": 65519, 00:17:05.272 "namespaces": [ 00:17:05.272 { 00:17:05.272 "nsid": 1, 00:17:05.272 "bdev_name": "Malloc1", 00:17:05.272 "name": "Malloc1", 00:17:05.272 "nguid": "BD29F8108AC347FD8B3622B9BC9197DD", 00:17:05.272 "uuid": "bd29f810-8ac3-47fd-8b36-22b9bc9197dd" 00:17:05.272 }, 00:17:05.272 { 00:17:05.272 "nsid": 2, 00:17:05.272 "bdev_name": "Malloc3", 00:17:05.272 "name": "Malloc3", 00:17:05.272 "nguid": "2D8CA347007B44128FC1469A270043F6", 00:17:05.273 "uuid": "2d8ca347-007b-4412-8fc1-469a270043f6" 00:17:05.273 } 00:17:05.273 ] 00:17:05.273 }, 00:17:05.273 { 00:17:05.273 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:05.273 "subtype": "NVMe", 00:17:05.273 "listen_addresses": [ 00:17:05.273 { 00:17:05.273 "trtype": "VFIOUSER", 00:17:05.273 "adrfam": "IPv4", 00:17:05.273 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:05.273 "trsvcid": "0" 00:17:05.273 } 00:17:05.273 ], 00:17:05.273 "allow_any_host": true, 00:17:05.273 "hosts": [], 00:17:05.273 "serial_number": "SPDK2", 00:17:05.273 "model_number": "SPDK bdev Controller", 00:17:05.273 "max_namespaces": 32, 00:17:05.273 "min_cntlid": 1, 00:17:05.273 "max_cntlid": 65519, 00:17:05.273 "namespaces": [ 00:17:05.273 { 00:17:05.273 "nsid": 1, 00:17:05.273 "bdev_name": "Malloc2", 00:17:05.273 "name": "Malloc2", 00:17:05.273 "nguid": "7AB2BC3277D64B51A625F741A85F6177", 00:17:05.273 "uuid": "7ab2bc32-77d6-4b51-a625-f741a85f6177" 00:17:05.273 } 00:17:05.273 ] 00:17:05.273 } 00:17:05.273 ] 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1271051 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:05.273 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:05.273 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.535 [2024-07-25 10:05:44.436602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:05.535 Malloc4 00:17:05.535 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:05.535 [2024-07-25 10:05:44.601698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:05.535 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:05.535 Asynchronous Event Request test 00:17:05.535 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.535 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:05.535 Registering asynchronous event callbacks... 00:17:05.535 Starting namespace attribute notice tests for all controllers... 00:17:05.535 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:05.535 aer_cb - Changed Namespace 00:17:05.535 Cleaning up... 
00:17:05.796 [ 00:17:05.796 { 00:17:05.796 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:05.796 "subtype": "Discovery", 00:17:05.796 "listen_addresses": [], 00:17:05.796 "allow_any_host": true, 00:17:05.796 "hosts": [] 00:17:05.796 }, 00:17:05.796 { 00:17:05.796 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:05.796 "subtype": "NVMe", 00:17:05.796 "listen_addresses": [ 00:17:05.796 { 00:17:05.796 "trtype": "VFIOUSER", 00:17:05.796 "adrfam": "IPv4", 00:17:05.796 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:05.796 "trsvcid": "0" 00:17:05.796 } 00:17:05.796 ], 00:17:05.796 "allow_any_host": true, 00:17:05.796 "hosts": [], 00:17:05.796 "serial_number": "SPDK1", 00:17:05.796 "model_number": "SPDK bdev Controller", 00:17:05.796 "max_namespaces": 32, 00:17:05.796 "min_cntlid": 1, 00:17:05.796 "max_cntlid": 65519, 00:17:05.796 "namespaces": [ 00:17:05.796 { 00:17:05.796 "nsid": 1, 00:17:05.796 "bdev_name": "Malloc1", 00:17:05.796 "name": "Malloc1", 00:17:05.796 "nguid": "BD29F8108AC347FD8B3622B9BC9197DD", 00:17:05.796 "uuid": "bd29f810-8ac3-47fd-8b36-22b9bc9197dd" 00:17:05.796 }, 00:17:05.796 { 00:17:05.796 "nsid": 2, 00:17:05.796 "bdev_name": "Malloc3", 00:17:05.796 "name": "Malloc3", 00:17:05.796 "nguid": "2D8CA347007B44128FC1469A270043F6", 00:17:05.796 "uuid": "2d8ca347-007b-4412-8fc1-469a270043f6" 00:17:05.796 } 00:17:05.796 ] 00:17:05.796 }, 00:17:05.796 { 00:17:05.796 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:05.796 "subtype": "NVMe", 00:17:05.796 "listen_addresses": [ 00:17:05.796 { 00:17:05.796 "trtype": "VFIOUSER", 00:17:05.796 "adrfam": "IPv4", 00:17:05.796 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:05.796 "trsvcid": "0" 00:17:05.796 } 00:17:05.796 ], 00:17:05.796 "allow_any_host": true, 00:17:05.796 "hosts": [], 00:17:05.796 "serial_number": "SPDK2", 00:17:05.796 "model_number": "SPDK bdev Controller", 00:17:05.796 "max_namespaces": 32, 00:17:05.796 "min_cntlid": 1, 00:17:05.796 "max_cntlid": 65519, 00:17:05.796 "namespaces": [ 00:17:05.796 { 00:17:05.796 "nsid": 1, 00:17:05.796 "bdev_name": "Malloc2", 00:17:05.796 "name": "Malloc2", 00:17:05.797 "nguid": "7AB2BC3277D64B51A625F741A85F6177", 00:17:05.797 "uuid": "7ab2bc32-77d6-4b51-a625-f741a85f6177" 00:17:05.797 }, 00:17:05.797 { 00:17:05.797 "nsid": 2, 00:17:05.797 "bdev_name": "Malloc4", 00:17:05.797 "name": "Malloc4", 00:17:05.797 "nguid": "ABA0C1EE3B514160AEE21E6DD48E3E83", 00:17:05.797 "uuid": "aba0c1ee-3b51-4160-aee2-1e6dd48e3e83" 00:17:05.797 } 00:17:05.797 ] 00:17:05.797 } 00:17:05.797 ] 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1271051 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1262075 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1262075 ']' 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1262075 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1262075 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1262075' 00:17:05.797 killing process with pid 1262075 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1262075 00:17:05.797 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1262075 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1271315 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1271315' 00:17:06.058 Process pid: 1271315 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1271315 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1271315 ']' 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.058 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:06.058 [2024-07-25 10:05:45.077601] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:06.058 [2024-07-25 10:05:45.078552] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:17:06.059 [2024-07-25 10:05:45.078599] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.059 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.059 [2024-07-25 10:05:45.139707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.320 [2024-07-25 10:05:45.205284] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.320 [2024-07-25 10:05:45.205323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.320 [2024-07-25 10:05:45.205331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.320 [2024-07-25 10:05:45.205337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.320 [2024-07-25 10:05:45.205343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.320 [2024-07-25 10:05:45.205505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.320 [2024-07-25 10:05:45.205639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.320 [2024-07-25 10:05:45.205799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.320 [2024-07-25 10:05:45.205800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.320 [2024-07-25 10:05:45.273028] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:06.320 [2024-07-25 10:05:45.273054] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:06.320 [2024-07-25 10:05:45.274087] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:06.320 [2024-07-25 10:05:45.274737] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:06.320 [2024-07-25 10:05:45.274814] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:06.892 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:06.893 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:17:06.893 10:05:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:07.836 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:08.098 Malloc1 00:17:08.098 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:08.359 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:08.621 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:08.621 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:08.621 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:08.621 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:08.883 Malloc2 00:17:08.883 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:09.143 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:09.143 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1271315 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@950 -- # '[' -z 1271315 ']' 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1271315 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1271315 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1271315' 00:17:09.404 killing process with pid 1271315 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1271315 00:17:09.404 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1271315 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:09.667 00:17:09.667 real 0m50.490s 00:17:09.667 user 3m20.116s 00:17:09.667 sys 0m2.978s 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:09.667 ************************************ 00:17:09.667 END TEST nvmf_vfio_user 00:17:09.667 ************************************ 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:09.667 ************************************ 00:17:09.667 START TEST nvmf_vfio_user_nvme_compliance 00:17:09.667 ************************************ 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:09.667 * Looking for test storage... 
00:17:09.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1272069 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1272069' 00:17:09.667 Process pid: 1272069 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1272069 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1272069 ']' 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.667 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:09.668 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.928 [2024-07-25 10:05:48.849024] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
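Before the compliance suite can issue RPCs, the harness launches nvmf_tgt with the arguments shown above (-i 0 for the shared-memory id, -e 0xFFFF for the tracepoint group mask echoed in the notices below, -m 0x7 for three cores) and waits for the app to listen on /var/tmp/spdk.sock. A rough stand-in for that start-and-wait step, with SPDK_DIR as a placeholder and a plain polling loop replacing the harness's waitforlisten helper:

# Simplified stand-in for waitforlisten; SPDK_DIR is a placeholder for the checkout path.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

for _ in $(seq 1 100); do
    # rpc_get_methods only succeeds once the target is up and listening on /var/tmp/spdk.sock.
    if "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done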
00:17:09.928 [2024-07-25 10:05:48.849099] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.928 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.928 [2024-07-25 10:05:48.913527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.928 [2024-07-25 10:05:48.987778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.928 [2024-07-25 10:05:48.987816] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.928 [2024-07-25 10:05:48.987824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.928 [2024-07-25 10:05:48.987831] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.928 [2024-07-25 10:05:48.987836] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.928 [2024-07-25 10:05:48.987982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.928 [2024-07-25 10:05:48.988110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.928 [2024-07-25 10:05:48.988113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.500 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:10.500 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:17:10.500 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.885 malloc0 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:17:11.885 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.886 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:11.886 EAL: No free 2048 kB hugepages reported on node 1 00:17:11.886 00:17:11.886 00:17:11.886 CUnit - A unit testing framework for C - Version 2.1-3 00:17:11.886 http://cunit.sourceforge.net/ 00:17:11.886 00:17:11.886 00:17:11.886 Suite: nvme_compliance 00:17:11.886 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 10:05:50.878371] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.886 [2024-07-25 10:05:50.879712] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:11.886 [2024-07-25 10:05:50.879723] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:11.886 [2024-07-25 10:05:50.879727] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:11.886 [2024-07-25 10:05:50.881388] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.886 passed 00:17:11.886 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 10:05:50.976002] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.886 [2024-07-25 10:05:50.979018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.147 passed 00:17:12.147 Test: admin_identify_ns ...[2024-07-25 10:05:51.075250] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.147 [2024-07-25 10:05:51.136211] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:12.147 [2024-07-25 10:05:51.144212] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:12.147 [2024-07-25 
10:05:51.165340] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.147 passed 00:17:12.147 Test: admin_get_features_mandatory_features ...[2024-07-25 10:05:51.258334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.147 [2024-07-25 10:05:51.262361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.408 passed 00:17:12.408 Test: admin_get_features_optional_features ...[2024-07-25 10:05:51.354901] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.408 [2024-07-25 10:05:51.357917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.408 passed 00:17:12.408 Test: admin_set_features_number_of_queues ...[2024-07-25 10:05:51.451053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.668 [2024-07-25 10:05:51.555310] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.668 passed 00:17:12.668 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 10:05:51.648972] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.668 [2024-07-25 10:05:51.651983] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.668 passed 00:17:12.668 Test: admin_get_log_page_with_lpo ...[2024-07-25 10:05:51.744093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.929 [2024-07-25 10:05:51.811210] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:12.929 [2024-07-25 10:05:51.824337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.929 passed 00:17:12.929 Test: fabric_property_get ...[2024-07-25 10:05:51.917928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.929 [2024-07-25 10:05:51.919168] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:12.929 [2024-07-25 10:05:51.920952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.929 passed 00:17:12.929 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 10:05:52.015495] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.929 [2024-07-25 10:05:52.016751] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:12.929 [2024-07-25 10:05:52.018517] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.929 passed 00:17:13.190 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 10:05:52.112449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.190 [2024-07-25 10:05:52.196207] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:13.190 [2024-07-25 10:05:52.212210] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:13.190 [2024-07-25 10:05:52.217289] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.190 passed 00:17:13.190 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 10:05:52.309277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.190 [2024-07-25 10:05:52.310524] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:17:13.190 [2024-07-25 10:05:52.312299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.451 passed 00:17:13.451 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 10:05:52.404404] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.451 [2024-07-25 10:05:52.480221] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:13.451 [2024-07-25 10:05:52.504207] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:13.451 [2024-07-25 10:05:52.509296] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.451 passed 00:17:13.712 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 10:05:52.603264] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.712 [2024-07-25 10:05:52.604506] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:13.712 [2024-07-25 10:05:52.604526] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:13.712 [2024-07-25 10:05:52.606282] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.712 passed 00:17:13.712 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 10:05:52.699431] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.712 [2024-07-25 10:05:52.791212] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:13.712 [2024-07-25 10:05:52.799210] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:13.712 [2024-07-25 10:05:52.807219] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:13.712 [2024-07-25 10:05:52.815210] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:13.712 [2024-07-25 10:05:52.844294] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.973 passed 00:17:13.973 Test: admin_create_io_sq_verify_pc ...[2024-07-25 10:05:52.938310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.973 [2024-07-25 10:05:52.957216] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:13.973 [2024-07-25 10:05:52.974483] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.973 passed 00:17:13.973 Test: admin_create_io_qp_max_qps ...[2024-07-25 10:05:53.063011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.356 [2024-07-25 10:05:54.175209] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:15.616 [2024-07-25 10:05:54.562190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.616 passed 00:17:15.616 Test: admin_create_io_sq_shared_cq ...[2024-07-25 10:05:54.654367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:15.877 [2024-07-25 10:05:54.788207] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:15.877 [2024-07-25 10:05:54.825266] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:15.877 passed 00:17:15.877 00:17:15.877 Run Summary: Type Total Ran Passed Failed Inactive 00:17:15.877 
suites 1 1 n/a 0 0 00:17:15.877 tests 18 18 18 0 0 00:17:15.877 asserts 360 360 360 0 n/a 00:17:15.877 00:17:15.877 Elapsed time = 1.656 seconds 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1272069 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1272069 ']' 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1272069 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1272069 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1272069' 00:17:15.877 killing process with pid 1272069 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1272069 00:17:15.877 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1272069 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:16.138 00:17:16.138 real 0m6.414s 00:17:16.138 user 0m18.354s 00:17:16.138 sys 0m0.460s 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:16.138 ************************************ 00:17:16.138 END TEST nvmf_vfio_user_nvme_compliance 00:17:16.138 ************************************ 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:16.138 ************************************ 00:17:16.138 START TEST nvmf_vfio_user_fuzz 00:17:16.138 ************************************ 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:16.138 * Looking for test storage... 
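For reference, the binary that produced the CUnit summary above reaches the target purely through a transport ID string; the invocation below repeats the one recorded in the log, with SPDK_DIR as a placeholder and the flags copied as-is:

# Flags taken verbatim from the log; SPDK_DIR is a placeholder.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/test/nvme/compliance/nvme_compliance" -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Against the vfio-user endpoint created above it reports 18/18 tests and 360/360 asserts passing in roughly 1.7 seconds, matching the Run Summary.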
00:17:16.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.138 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.398 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1273460 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1273460' 00:17:16.399 Process pid: 1273460 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1273460 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1273460 ']' 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
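As with the previous targets, the fuzz harness stores the nvmf_tgt pid in nvmfpid and registers the trap seen above so the target is torn down even on an aborted run. A condensed sketch of that pattern, assuming killprocess behaves like the autotest helper (probe the pid, kill it, wait for it); it is not the helper itself:

# Condensed trap/killprocess pattern from the log, not the real autotest helper.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to clean up
    kill "$pid"
    wait "$pid" 2>/dev/null
}

SPDK_DIR=/path/to/spdk                                   # placeholder
"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &   # core mask 0x1, as used for the fuzz target
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT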
00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:16.399 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:17.396 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:17.396 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:17.396 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:17.993 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:17.993 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.993 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:17.993 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.993 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:18.253 malloc0 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
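With $trid assembled, the harness hands it directly to the nvme_fuzz app, which is the next command in the log. A sketch of that invocation with SPDK_DIR as a placeholder; -m 0x2 is the usual SPDK core mask and -t 30 bounds the run (consistent with the ~33 s wall-clock time reported afterwards), while -S, -N and -a are simply copied from the log:

# Invocation copied from the log; SPDK_DIR is a placeholder, other flags as recorded.
SPDK_DIR=/path/to/spdk
trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
"$SPDK_DIR/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -t 30 -S 123456 -F "$trid" -N -a

The completion dump that follows shows roughly 1.15 M I/O commands and 144 k admin commands issued against the vfio-user controller during that window.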
00:17:18.253 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:50.374 Fuzzing completed. Shutting down the fuzz application 00:17:50.374 00:17:50.374 Dumping successful admin opcodes: 00:17:50.374 8, 9, 10, 24, 00:17:50.374 Dumping successful io opcodes: 00:17:50.374 0, 00:17:50.374 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1147397, total successful commands: 4521, random_seed: 864289152 00:17:50.374 NS: 0x200003a1ef00 admin qp, Total commands completed: 144488, total successful commands: 1173, random_seed: 546682752 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1273460 ']' 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1273460' 00:17:50.374 killing process with pid 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1273460 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:50.374 00:17:50.374 real 0m33.690s 00:17:50.374 user 0m38.462s 00:17:50.374 sys 0m25.773s 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:50.374 
************************************ 00:17:50.374 END TEST nvmf_vfio_user_fuzz 00:17:50.374 ************************************ 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:50.374 ************************************ 00:17:50.374 START TEST nvmf_auth_target 00:17:50.374 ************************************ 00:17:50.374 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:50.374 * Looking for test storage... 00:17:50.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.374 10:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.374 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
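As in the earlier suites, sourcing nvmf/common.sh derives a host identity once per test: nvme gen-hostnqn supplies the host NQN and its UUID portion becomes the host ID used with the nvme connect options (NVME_CONNECT in the same output). A small sketch of that derivation; the prefix-stripping expansion is my reconstruction, since the log only shows the resulting values:

# Reconstruction of the host-identity setup visible in the sourced nvmf/common.sh output.
NVME_HOSTNQN=$(nvme gen-hostnqn)                                 # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN#nqn.2014-08.org.nvmexpress:uuid:}     # keep only the uuid (assumed derivation)
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "${NVME_HOST[@]}"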
00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:50.375 10:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.970 10:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:56.970 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:56.970 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:56.970 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:56.970 10:06:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:56.970 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:56.970 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:56.971 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.971 10:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.971 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:57.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:17:57.233 00:17:57.233 --- 10.0.0.2 ping statistics --- 00:17:57.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.233 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:57.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:17:57.233 00:17:57.233 --- 10.0.0.1 ping statistics --- 00:17:57.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.233 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1284239 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1284239 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1284239 ']' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.233 10:06:36 
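The nvmf_tcp_init step above builds a two-port loopback topology: the first ice port (cvl_0_0) is moved into a private network namespace for the target, the second (cvl_0_1) stays in the default namespace for the initiator, and a single ping in each direction confirms the 10.0.0.0/24 link before nvmf_tgt is launched inside that namespace. Collected from the commands in the log (interface names, addresses and the namespace name are this run's values), the wiring is:

# Sketch of the namespace wiring performed by nvmf_tcp_init above.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add "$NS"                                      # target side gets its own namespace
ip link set cvl_0_0 netns "$NS"                         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator interface, default namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0 # target interface, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in on the initiator side
ping -c 1 10.0.0.2                                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator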
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.233 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1284365 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=37c1fc8c85fba99508e064c2c804098268b0d3d1854c0c33 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.F0J 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 37c1fc8c85fba99508e064c2c804098268b0d3d1854c0c33 0 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 37c1fc8c85fba99508e064c2c804098268b0d3d1854c0c33 0 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=37c1fc8c85fba99508e064c2c804098268b0d3d1854c0c33 00:17:58.179 10:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.F0J 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.F0J 00:17:58.179 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.F0J 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ff7b66008af10f1d81c7381f53ebb0639a4928200beec7830bf10b54b890711a 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.L4c 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ff7b66008af10f1d81c7381f53ebb0639a4928200beec7830bf10b54b890711a 3 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ff7b66008af10f1d81c7381f53ebb0639a4928200beec7830bf10b54b890711a 3 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ff7b66008af10f1d81c7381f53ebb0639a4928200beec7830bf10b54b890711a 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.L4c 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.L4c 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.L4c 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.180 10:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b5d8805a85b9432f98e50d764a90aa2a 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C8q 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b5d8805a85b9432f98e50d764a90aa2a 1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b5d8805a85b9432f98e50d764a90aa2a 1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b5d8805a85b9432f98e50d764a90aa2a 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C8q 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C8q 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.C8q 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3f2e87e2b802b9f14f476a9d334043c9bb75d59eda3cc128 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.gVt 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3f2e87e2b802b9f14f476a9d334043c9bb75d59eda3cc128 2 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
3f2e87e2b802b9f14f476a9d334043c9bb75d59eda3cc128 2 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3f2e87e2b802b9f14f476a9d334043c9bb75d59eda3cc128 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.gVt 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.gVt 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.gVt 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:17:58.180 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=288bdb1b50e4bcc9f90ba83288f3cb75a01c8680f55d4f99 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Kka 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 288bdb1b50e4bcc9f90ba83288f3cb75a01c8680f55d4f99 2 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 288bdb1b50e4bcc9f90ba83288f3cb75a01c8680f55d4f99 2 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=288bdb1b50e4bcc9f90ba83288f3cb75a01c8680f55d4f99 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Kka 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Kka 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.Kka 00:17:58.442 10:06:37 
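Each gen_dhchap_key call above pulls random hex from /dev/urandom with xxd (16, 24 or 32 bytes for 32-, 48- or 64-character secrets) and then wraps it in a DHHC-1 string via an inline python helper whose body is not shown in the log. A rough standalone equivalent, assuming the usual DHHC-1 encoding (base64 of the secret bytes followed by their CRC-32, behind a two-digit hash id: 00 none, 01 sha256, 02 sha384, 03 sha512), would be:

# Rough equivalent of gen_dhchap_key/format_dhchap_key; the exact helper is not visible in the log.
# Example: a 48-character hex secret with hash id 0 ("null"), as used for key0 above.
key=$(xxd -p -c0 -l 24 /dev/urandom)          # 24 random bytes -> 48 hex characters
python3 - "$key" 0 <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = struct.pack("<I", zlib.crc32(key) & 0xffffffff)   # CRC-32 of the secret, little-endian
print("DHHC-1:%02d:%s:" % (digest, base64.b64encode(key + crc).decode()))
EOF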
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=00e82593f62457d019a12a6a382625c2 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.FZs 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 00e82593f62457d019a12a6a382625c2 1 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 00e82593f62457d019a12a6a382625c2 1 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=00e82593f62457d019a12a6a382625c2 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.FZs 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.FZs 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.FZs 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cbd8e3103855449bac6c2c63d7a18546bc4d552345967e1dbb040519de835ffd 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:17:58.442 
10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.5lj 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cbd8e3103855449bac6c2c63d7a18546bc4d552345967e1dbb040519de835ffd 3 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cbd8e3103855449bac6c2c63d7a18546bc4d552345967e1dbb040519de835ffd 3 00:17:58.442 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cbd8e3103855449bac6c2c63d7a18546bc4d552345967e1dbb040519de835ffd 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.5lj 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.5lj 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.5lj 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1284239 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1284239 ']' 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.443 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1284365 /var/tmp/host.sock 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1284365 ']' 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:17:58.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.704 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.F0J 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.705 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.F0J 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.F0J 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.L4c ]] 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4c 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.966 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.967 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.967 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4c 00:17:58.967 10:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4c 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.C8q 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.228 10:06:38 
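Every secret is registered twice so both ends can refer to it by name: rpc_cmd talks to the nvmf target over its default /var/tmp/spdk.sock, while hostrpc talks to the host-side spdk_tgt that was started with -r /var/tmp/host.sock. For key0 and its controller key the pattern from the log is:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side (default socket /var/tmp/spdk.sock)
"$RPC" keyring_file_add_key key0  /tmp/spdk.key-null.F0J
"$RPC" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4c
# host side (spdk_tgt listening on /var/tmp/host.sock)
"$RPC" -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.F0J
"$RPC" -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.L4c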
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.C8q 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.C8q 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.gVt ]] 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gVt 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gVt 00:17:59.228 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gVt 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kka 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Kka 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Kka 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.FZs ]] 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FZs 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FZs 00:17:59.489 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FZs 00:17:59.750 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.5lj 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.5lj 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.5lj 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.751 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.011 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.011 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:00.011 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.012 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
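For each digest/dhgroup/keyid combination the loop first pins the host to a single algorithm pair with bdev_nvme_set_options, then authorizes the host NQN on the target subsystem with the matching key names, and finally attaches a controller through the host-side target. A condensed form of the sha256/null/key0 iteration above, with RPC/HOSTRPC as shorthand helpers for the two rpc.py invocations seen in the log, is:

RPC()     { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"; }                        # target, /var/tmp/spdk.sock
HOSTRPC() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }  # host-side spdk_tgt
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# host: only negotiate sha256 with the "null" DH group for this pass
HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
# target: allow the host NQN, bound to key0/ckey0 registered earlier
RPC nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
# host: attach an authenticated controller over TCP
HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0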
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:00.273 00:18:00.273 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.273 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.273 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.535 { 00:18:00.535 "cntlid": 1, 00:18:00.535 "qid": 0, 00:18:00.535 "state": "enabled", 00:18:00.535 "thread": "nvmf_tgt_poll_group_000", 00:18:00.535 "listen_address": { 00:18:00.535 "trtype": "TCP", 00:18:00.535 "adrfam": "IPv4", 00:18:00.535 "traddr": "10.0.0.2", 00:18:00.535 "trsvcid": "4420" 00:18:00.535 }, 00:18:00.535 "peer_address": { 00:18:00.535 "trtype": "TCP", 00:18:00.535 "adrfam": "IPv4", 00:18:00.535 "traddr": "10.0.0.1", 00:18:00.535 "trsvcid": "52818" 00:18:00.535 }, 00:18:00.535 "auth": { 00:18:00.535 "state": "completed", 00:18:00.535 "digest": "sha256", 00:18:00.535 "dhgroup": "null" 00:18:00.535 } 00:18:00.535 } 00:18:00.535 ]' 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.535 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.796 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:01.368 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.630 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:01.891 00:18:01.891 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.891 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.891 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:02.153 { 00:18:02.153 "cntlid": 3, 00:18:02.153 "qid": 0, 00:18:02.153 "state": "enabled", 00:18:02.153 "thread": "nvmf_tgt_poll_group_000", 00:18:02.153 "listen_address": { 00:18:02.153 "trtype": "TCP", 00:18:02.153 "adrfam": "IPv4", 00:18:02.153 "traddr": "10.0.0.2", 00:18:02.153 "trsvcid": "4420" 00:18:02.153 }, 00:18:02.153 "peer_address": { 00:18:02.153 "trtype": "TCP", 00:18:02.153 "adrfam": "IPv4", 00:18:02.153 "traddr": "10.0.0.1", 00:18:02.153 "trsvcid": "52840" 00:18:02.153 }, 00:18:02.153 "auth": { 00:18:02.153 "state": "completed", 00:18:02.153 "digest": "sha256", 00:18:02.153 "dhgroup": "null" 00:18:02.153 } 00:18:02.153 } 00:18:02.153 ]' 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.153 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.414 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:02.985 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.985 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:03.244 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.244 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.244 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.245 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.505 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.505 10:06:42 
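After each attach, the test checks that the new admin queue actually completed DH-HMAC-CHAP instead of connecting unauthenticated: it lists the host-side controllers, dumps the subsystem's qpairs from the target, and inspects digest, dhgroup and auth state with jq, as in the cntlid 1/3/5 blocks above. The check reduces to roughly:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
qpairs=$("$RPC" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)        # target side
jq -r '.[0].auth.digest'  <<< "$qpairs"                                      # expect "sha256"
jq -r '.[0].auth.dhgroup' <<< "$qpairs"                                      # expect "null"
jq -r '.[0].auth.state'   <<< "$qpairs"                                      # expect "completed"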
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.505 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.766 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.767 { 00:18:03.767 "cntlid": 5, 00:18:03.767 "qid": 0, 00:18:03.767 "state": "enabled", 00:18:03.767 "thread": "nvmf_tgt_poll_group_000", 00:18:03.767 "listen_address": { 00:18:03.767 "trtype": "TCP", 00:18:03.767 "adrfam": "IPv4", 00:18:03.767 "traddr": "10.0.0.2", 00:18:03.767 "trsvcid": "4420" 00:18:03.767 }, 00:18:03.767 "peer_address": { 00:18:03.767 "trtype": "TCP", 00:18:03.767 "adrfam": "IPv4", 00:18:03.767 "traddr": "10.0.0.1", 00:18:03.767 "trsvcid": "52872" 00:18:03.767 }, 00:18:03.767 "auth": { 00:18:03.767 "state": "completed", 00:18:03.767 "digest": "sha256", 00:18:03.767 "dhgroup": "null" 00:18:03.767 } 00:18:03.767 } 00:18:03.767 ]' 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.767 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.027 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:04.598 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.599 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:04.859 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.119 00:18:05.119 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:05.119 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:05.119 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:05.380 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:05.380 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:05.381 { 00:18:05.381 "cntlid": 7, 00:18:05.381 "qid": 0, 00:18:05.381 "state": "enabled", 00:18:05.381 "thread": "nvmf_tgt_poll_group_000", 00:18:05.381 "listen_address": { 00:18:05.381 "trtype": "TCP", 00:18:05.381 "adrfam": "IPv4", 00:18:05.381 "traddr": "10.0.0.2", 00:18:05.381 "trsvcid": "4420" 00:18:05.381 }, 00:18:05.381 "peer_address": { 00:18:05.381 "trtype": "TCP", 00:18:05.381 "adrfam": "IPv4", 00:18:05.381 "traddr": "10.0.0.1", 00:18:05.381 "trsvcid": "34334" 00:18:05.381 }, 00:18:05.381 "auth": { 00:18:05.381 "state": "completed", 00:18:05.381 "digest": "sha256", 00:18:05.381 "dhgroup": "null" 00:18:05.381 } 00:18:05.381 } 00:18:05.381 ]' 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.381 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.685 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.270 10:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.270 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.531 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.792 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.792 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.053 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.053 10:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.053 { 00:18:07.053 "cntlid": 9, 00:18:07.053 "qid": 0, 00:18:07.053 "state": "enabled", 00:18:07.053 "thread": "nvmf_tgt_poll_group_000", 00:18:07.053 "listen_address": { 00:18:07.053 "trtype": "TCP", 00:18:07.053 "adrfam": "IPv4", 00:18:07.053 "traddr": "10.0.0.2", 00:18:07.053 "trsvcid": "4420" 00:18:07.053 }, 00:18:07.053 "peer_address": { 00:18:07.053 "trtype": "TCP", 00:18:07.053 "adrfam": "IPv4", 00:18:07.053 "traddr": "10.0.0.1", 00:18:07.053 "trsvcid": "34364" 00:18:07.053 }, 00:18:07.053 "auth": { 00:18:07.053 "state": "completed", 00:18:07.053 "digest": "sha256", 00:18:07.053 "dhgroup": "ffdhe2048" 00:18:07.053 } 00:18:07.053 } 00:18:07.053 ]' 00:18:07.053 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.053 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:07.053 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.053 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:07.053 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:07.053 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.053 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.053 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.314 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:07.886 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.147 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.408 00:18:08.408 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.408 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.408 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.669 { 00:18:08.669 "cntlid": 11, 00:18:08.669 "qid": 0, 00:18:08.669 "state": "enabled", 00:18:08.669 "thread": "nvmf_tgt_poll_group_000", 00:18:08.669 "listen_address": { 
00:18:08.669 "trtype": "TCP", 00:18:08.669 "adrfam": "IPv4", 00:18:08.669 "traddr": "10.0.0.2", 00:18:08.669 "trsvcid": "4420" 00:18:08.669 }, 00:18:08.669 "peer_address": { 00:18:08.669 "trtype": "TCP", 00:18:08.669 "adrfam": "IPv4", 00:18:08.669 "traddr": "10.0.0.1", 00:18:08.669 "trsvcid": "34380" 00:18:08.669 }, 00:18:08.669 "auth": { 00:18:08.669 "state": "completed", 00:18:08.669 "digest": "sha256", 00:18:08.669 "dhgroup": "ffdhe2048" 00:18:08.669 } 00:18:08.669 } 00:18:08.669 ]' 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.669 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.930 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:09.500 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.500 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.500 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.500 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.760 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.021 00:18:10.021 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.021 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.021 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:10.282 { 00:18:10.282 "cntlid": 13, 00:18:10.282 "qid": 0, 00:18:10.282 "state": "enabled", 00:18:10.282 "thread": "nvmf_tgt_poll_group_000", 00:18:10.282 "listen_address": { 00:18:10.282 "trtype": "TCP", 00:18:10.282 "adrfam": "IPv4", 00:18:10.282 "traddr": "10.0.0.2", 00:18:10.282 "trsvcid": "4420" 00:18:10.282 }, 00:18:10.282 "peer_address": { 00:18:10.282 "trtype": "TCP", 00:18:10.282 "adrfam": "IPv4", 00:18:10.282 "traddr": "10.0.0.1", 00:18:10.282 "trsvcid": "34420" 00:18:10.282 }, 00:18:10.282 "auth": { 00:18:10.282 
"state": "completed", 00:18:10.282 "digest": "sha256", 00:18:10.282 "dhgroup": "ffdhe2048" 00:18:10.282 } 00:18:10.282 } 00:18:10.282 ]' 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.282 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.543 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.484 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:11.746 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.746 { 00:18:11.746 "cntlid": 15, 00:18:11.746 "qid": 0, 00:18:11.746 "state": "enabled", 00:18:11.746 "thread": "nvmf_tgt_poll_group_000", 00:18:11.746 "listen_address": { 00:18:11.746 "trtype": "TCP", 00:18:11.746 "adrfam": "IPv4", 00:18:11.746 "traddr": "10.0.0.2", 00:18:11.746 "trsvcid": "4420" 00:18:11.746 }, 00:18:11.746 "peer_address": { 00:18:11.746 "trtype": "TCP", 00:18:11.746 "adrfam": "IPv4", 00:18:11.746 "traddr": "10.0.0.1", 00:18:11.746 "trsvcid": "34456" 00:18:11.746 }, 00:18:11.746 "auth": { 00:18:11.746 "state": "completed", 00:18:11.746 "digest": "sha256", 00:18:11.746 "dhgroup": "ffdhe2048" 00:18:11.746 } 00:18:11.746 } 00:18:11.746 ]' 00:18:11.746 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.007 10:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.007 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.268 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:12.842 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.103 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:13.364 00:18:13.364 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:13.364 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.364 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.624 { 00:18:13.624 "cntlid": 17, 00:18:13.624 "qid": 0, 00:18:13.624 "state": "enabled", 00:18:13.624 "thread": "nvmf_tgt_poll_group_000", 00:18:13.624 "listen_address": { 00:18:13.624 "trtype": "TCP", 00:18:13.624 "adrfam": "IPv4", 00:18:13.624 "traddr": "10.0.0.2", 00:18:13.624 "trsvcid": "4420" 00:18:13.624 }, 00:18:13.624 "peer_address": { 00:18:13.624 "trtype": "TCP", 00:18:13.624 "adrfam": "IPv4", 00:18:13.624 "traddr": "10.0.0.1", 00:18:13.624 "trsvcid": "34478" 00:18:13.624 }, 00:18:13.624 "auth": { 00:18:13.624 "state": "completed", 00:18:13.624 "digest": "sha256", 00:18:13.624 "dhgroup": "ffdhe3072" 00:18:13.624 } 00:18:13.624 } 00:18:13.624 ]' 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.624 10:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.624 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.884 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.454 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.714 10:06:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.714 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.974 00:18:14.974 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.974 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.974 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:15.234 { 00:18:15.234 "cntlid": 19, 00:18:15.234 "qid": 0, 00:18:15.234 "state": "enabled", 00:18:15.234 "thread": "nvmf_tgt_poll_group_000", 00:18:15.234 "listen_address": { 00:18:15.234 "trtype": "TCP", 00:18:15.234 "adrfam": "IPv4", 00:18:15.234 "traddr": "10.0.0.2", 00:18:15.234 "trsvcid": "4420" 00:18:15.234 }, 00:18:15.234 "peer_address": { 00:18:15.234 "trtype": "TCP", 00:18:15.234 "adrfam": "IPv4", 00:18:15.234 "traddr": "10.0.0.1", 00:18:15.234 "trsvcid": "49816" 00:18:15.234 }, 00:18:15.234 "auth": { 00:18:15.234 "state": "completed", 00:18:15.234 "digest": "sha256", 00:18:15.234 "dhgroup": "ffdhe3072" 00:18:15.234 } 00:18:15.234 } 00:18:15.234 ]' 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.234 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.235 10:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.235 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.496 10:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.438 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:16.700 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.700 { 00:18:16.700 "cntlid": 21, 00:18:16.700 "qid": 0, 00:18:16.700 "state": "enabled", 00:18:16.700 "thread": "nvmf_tgt_poll_group_000", 00:18:16.700 "listen_address": { 00:18:16.700 "trtype": "TCP", 00:18:16.700 "adrfam": "IPv4", 00:18:16.700 "traddr": "10.0.0.2", 00:18:16.700 "trsvcid": "4420" 00:18:16.700 }, 00:18:16.700 "peer_address": { 00:18:16.700 "trtype": "TCP", 00:18:16.700 "adrfam": "IPv4", 00:18:16.700 "traddr": "10.0.0.1", 00:18:16.700 "trsvcid": "49850" 00:18:16.700 }, 00:18:16.700 "auth": { 00:18:16.700 "state": "completed", 00:18:16.700 "digest": "sha256", 00:18:16.700 "dhgroup": "ffdhe3072" 00:18:16.700 } 00:18:16.700 } 00:18:16.700 ]' 00:18:16.700 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.960 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.221 
10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.792 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.053 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.314 00:18:18.314 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.314 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.314 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.575 { 00:18:18.575 "cntlid": 23, 00:18:18.575 "qid": 0, 00:18:18.575 "state": "enabled", 00:18:18.575 "thread": "nvmf_tgt_poll_group_000", 00:18:18.575 "listen_address": { 00:18:18.575 "trtype": "TCP", 00:18:18.575 "adrfam": "IPv4", 00:18:18.575 "traddr": "10.0.0.2", 00:18:18.575 "trsvcid": "4420" 00:18:18.575 }, 00:18:18.575 "peer_address": { 00:18:18.575 "trtype": "TCP", 00:18:18.575 "adrfam": "IPv4", 00:18:18.575 "traddr": "10.0.0.1", 00:18:18.575 "trsvcid": "49880" 00:18:18.575 }, 00:18:18.575 "auth": { 00:18:18.575 "state": "completed", 00:18:18.575 "digest": "sha256", 00:18:18.575 "dhgroup": "ffdhe3072" 00:18:18.575 } 00:18:18.575 } 00:18:18.575 ]' 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.575 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.838 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:19.410 10:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.672 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.933 00:18:19.933 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:19.933 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:19.933 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.194 { 00:18:20.194 "cntlid": 25, 00:18:20.194 "qid": 0, 00:18:20.194 "state": "enabled", 00:18:20.194 "thread": "nvmf_tgt_poll_group_000", 00:18:20.194 "listen_address": { 00:18:20.194 "trtype": "TCP", 00:18:20.194 "adrfam": "IPv4", 00:18:20.194 "traddr": "10.0.0.2", 00:18:20.194 "trsvcid": "4420" 00:18:20.194 }, 00:18:20.194 "peer_address": { 00:18:20.194 "trtype": "TCP", 00:18:20.194 "adrfam": "IPv4", 00:18:20.194 "traddr": "10.0.0.1", 00:18:20.194 "trsvcid": "49900" 00:18:20.194 }, 00:18:20.194 "auth": { 00:18:20.194 "state": "completed", 00:18:20.194 "digest": "sha256", 00:18:20.194 "dhgroup": "ffdhe4096" 00:18:20.194 } 00:18:20.194 } 00:18:20.194 ]' 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.194 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.455 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
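(Reference sketch.) Each iteration above follows the same connect_authenticate flow from target/auth.sh: restrict the host-side DH-HMAC-CHAP digest and DH group, register the host on the subsystem with a key pair, attach a controller through the host RPC socket, check the negotiated auth parameters on the resulting qpair, then tear everything down before the next digest/dhgroup/key combination. A minimal shell sketch of one pass, assuming the same rpc.py path, sockets, NQNs and listener shown in this trace, and that key0/ckey0 are keyring entries registered earlier in the test (their DHHC-1 values are not reproduced here):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  subnqn=nqn.2024-03.io.spdk:cnode0

  # host side (host.sock): limit the initiator to one digest/DH-group combination
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # target side (default RPC socket): allow the host with a bidirectional secret
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # host side: attach a controller, authenticating with the same key pair
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the qpair completed authentication with the expected parameters
  $rpc nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

  # tear down before the next iteration
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The nvme connect / nvme disconnect lines in the log are the kernel-initiator half of the same check: nvme-cli is given the raw DHHC-1 secrets directly (--dhchap-secret / --dhchap-ctrl-secret) rather than SPDK keyring names, and the subsequent "disconnected 1 controller(s)" indicates the connect, and therefore the fabric-level DH-HMAC-CHAP exchange, went through.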
00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.399 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.662 00:18:21.662 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.662 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.662 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.923 { 00:18:21.923 "cntlid": 27, 00:18:21.923 "qid": 0, 00:18:21.923 "state": "enabled", 00:18:21.923 "thread": "nvmf_tgt_poll_group_000", 00:18:21.923 "listen_address": { 00:18:21.923 "trtype": "TCP", 00:18:21.923 "adrfam": "IPv4", 00:18:21.923 "traddr": "10.0.0.2", 00:18:21.923 "trsvcid": "4420" 00:18:21.923 }, 00:18:21.923 "peer_address": { 00:18:21.923 "trtype": "TCP", 00:18:21.923 "adrfam": "IPv4", 00:18:21.923 "traddr": "10.0.0.1", 00:18:21.923 "trsvcid": "49926" 00:18:21.923 }, 00:18:21.923 "auth": { 00:18:21.923 "state": "completed", 00:18:21.923 "digest": "sha256", 00:18:21.923 "dhgroup": "ffdhe4096" 00:18:21.923 } 00:18:21.923 } 00:18:21.923 ]' 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.923 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.183 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.786 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.046 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.047 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.047 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.307 00:18:23.307 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.307 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.307 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.569 { 00:18:23.569 "cntlid": 29, 00:18:23.569 "qid": 0, 00:18:23.569 "state": "enabled", 00:18:23.569 "thread": "nvmf_tgt_poll_group_000", 00:18:23.569 "listen_address": { 00:18:23.569 "trtype": "TCP", 00:18:23.569 "adrfam": "IPv4", 00:18:23.569 "traddr": "10.0.0.2", 00:18:23.569 "trsvcid": "4420" 00:18:23.569 }, 00:18:23.569 "peer_address": { 00:18:23.569 "trtype": "TCP", 00:18:23.569 "adrfam": "IPv4", 00:18:23.569 "traddr": "10.0.0.1", 00:18:23.569 "trsvcid": "49948" 00:18:23.569 }, 00:18:23.569 "auth": { 00:18:23.569 "state": "completed", 00:18:23.569 "digest": "sha256", 00:18:23.569 "dhgroup": "ffdhe4096" 00:18:23.569 } 00:18:23.569 } 00:18:23.569 ]' 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.569 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.830 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.773 10:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.773 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:25.034 00:18:25.034 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:25.034 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.034 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.295 { 00:18:25.295 "cntlid": 31, 00:18:25.295 "qid": 0, 00:18:25.295 "state": "enabled", 00:18:25.295 "thread": "nvmf_tgt_poll_group_000", 00:18:25.295 "listen_address": { 00:18:25.295 "trtype": "TCP", 00:18:25.295 "adrfam": "IPv4", 00:18:25.295 "traddr": "10.0.0.2", 00:18:25.295 "trsvcid": "4420" 00:18:25.295 }, 00:18:25.295 "peer_address": { 00:18:25.295 "trtype": "TCP", 00:18:25.295 "adrfam": "IPv4", 00:18:25.295 "traddr": "10.0.0.1", 00:18:25.295 "trsvcid": "47802" 00:18:25.295 }, 00:18:25.295 "auth": { 00:18:25.295 "state": "completed", 00:18:25.295 "digest": "sha256", 00:18:25.295 "dhgroup": "ffdhe4096" 00:18:25.295 } 00:18:25.295 } 00:18:25.295 ]' 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.295 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.556 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.129 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.390 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.651 00:18:26.651 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.651 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.651 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.912 { 00:18:26.912 "cntlid": 33, 00:18:26.912 "qid": 0, 00:18:26.912 "state": "enabled", 00:18:26.912 "thread": "nvmf_tgt_poll_group_000", 00:18:26.912 "listen_address": { 
00:18:26.912 "trtype": "TCP", 00:18:26.912 "adrfam": "IPv4", 00:18:26.912 "traddr": "10.0.0.2", 00:18:26.912 "trsvcid": "4420" 00:18:26.912 }, 00:18:26.912 "peer_address": { 00:18:26.912 "trtype": "TCP", 00:18:26.912 "adrfam": "IPv4", 00:18:26.912 "traddr": "10.0.0.1", 00:18:26.912 "trsvcid": "47826" 00:18:26.912 }, 00:18:26.912 "auth": { 00:18:26.912 "state": "completed", 00:18:26.912 "digest": "sha256", 00:18:26.912 "dhgroup": "ffdhe6144" 00:18:26.912 } 00:18:26.912 } 00:18:26.912 ]' 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.912 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.912 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:26.912 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:27.172 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.172 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.172 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.172 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:28.114 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:28.114 10:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:28.114 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.115 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.686 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:28.686 { 00:18:28.686 "cntlid": 35, 00:18:28.686 "qid": 0, 00:18:28.686 "state": "enabled", 00:18:28.686 "thread": "nvmf_tgt_poll_group_000", 00:18:28.686 "listen_address": { 00:18:28.686 "trtype": "TCP", 00:18:28.686 "adrfam": "IPv4", 00:18:28.686 "traddr": "10.0.0.2", 00:18:28.686 "trsvcid": "4420" 00:18:28.686 }, 00:18:28.686 "peer_address": { 00:18:28.686 "trtype": "TCP", 00:18:28.686 "adrfam": "IPv4", 00:18:28.686 "traddr": "10.0.0.1", 00:18:28.686 "trsvcid": "47854" 00:18:28.686 
}, 00:18:28.686 "auth": { 00:18:28.686 "state": "completed", 00:18:28.686 "digest": "sha256", 00:18:28.686 "dhgroup": "ffdhe6144" 00:18:28.686 } 00:18:28.686 } 00:18:28.686 ]' 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:28.686 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.947 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.947 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.947 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.947 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:29.891 10:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.891 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.152 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:30.413 { 00:18:30.413 "cntlid": 37, 00:18:30.413 "qid": 0, 00:18:30.413 "state": "enabled", 00:18:30.413 "thread": "nvmf_tgt_poll_group_000", 00:18:30.413 "listen_address": { 00:18:30.413 "trtype": "TCP", 00:18:30.413 "adrfam": "IPv4", 00:18:30.413 "traddr": "10.0.0.2", 00:18:30.413 "trsvcid": "4420" 00:18:30.413 }, 00:18:30.413 "peer_address": { 00:18:30.413 "trtype": "TCP", 00:18:30.413 "adrfam": "IPv4", 00:18:30.413 "traddr": "10.0.0.1", 00:18:30.413 "trsvcid": "47890" 00:18:30.413 }, 00:18:30.413 "auth": { 00:18:30.413 "state": "completed", 00:18:30.413 "digest": "sha256", 00:18:30.413 "dhgroup": "ffdhe6144" 00:18:30.413 } 00:18:30.413 } 00:18:30.413 ]' 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:30.413 10:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.413 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:30.675 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.675 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.675 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.675 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.617 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.618 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.618 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.618 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.618 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.879 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.140 { 00:18:32.140 "cntlid": 39, 00:18:32.140 "qid": 0, 00:18:32.140 "state": "enabled", 00:18:32.140 "thread": "nvmf_tgt_poll_group_000", 00:18:32.140 "listen_address": { 00:18:32.140 "trtype": "TCP", 00:18:32.140 "adrfam": "IPv4", 00:18:32.140 "traddr": "10.0.0.2", 00:18:32.140 "trsvcid": "4420" 00:18:32.140 }, 00:18:32.140 "peer_address": { 00:18:32.140 "trtype": "TCP", 00:18:32.140 "adrfam": "IPv4", 00:18:32.140 "traddr": "10.0.0.1", 00:18:32.140 "trsvcid": "47908" 00:18:32.140 }, 00:18:32.140 "auth": { 00:18:32.140 "state": "completed", 00:18:32.140 "digest": "sha256", 00:18:32.140 "dhgroup": "ffdhe6144" 00:18:32.140 } 00:18:32.140 } 00:18:32.140 ]' 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.140 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.401 10:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.345 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.917 00:18:33.917 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:33.917 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:33.917 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.178 { 00:18:34.178 "cntlid": 41, 00:18:34.178 "qid": 0, 00:18:34.178 "state": "enabled", 00:18:34.178 "thread": "nvmf_tgt_poll_group_000", 00:18:34.178 "listen_address": { 00:18:34.178 "trtype": "TCP", 00:18:34.178 "adrfam": "IPv4", 00:18:34.178 "traddr": "10.0.0.2", 00:18:34.178 "trsvcid": "4420" 00:18:34.178 }, 00:18:34.178 "peer_address": { 00:18:34.178 "trtype": "TCP", 00:18:34.178 "adrfam": "IPv4", 00:18:34.178 "traddr": "10.0.0.1", 00:18:34.178 "trsvcid": "47934" 00:18:34.178 }, 00:18:34.178 "auth": { 00:18:34.178 "state": "completed", 00:18:34.178 "digest": "sha256", 00:18:34.178 "dhgroup": "ffdhe8192" 00:18:34.178 } 00:18:34.178 } 00:18:34.178 ]' 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:34.178 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.448 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:35.019 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.280 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.280 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.851 00:18:35.851 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.851 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:35.851 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.112 { 00:18:36.112 "cntlid": 43, 00:18:36.112 "qid": 0, 00:18:36.112 "state": "enabled", 00:18:36.112 "thread": "nvmf_tgt_poll_group_000", 00:18:36.112 "listen_address": { 00:18:36.112 "trtype": "TCP", 00:18:36.112 "adrfam": "IPv4", 00:18:36.112 "traddr": "10.0.0.2", 00:18:36.112 "trsvcid": "4420" 00:18:36.112 }, 00:18:36.112 "peer_address": { 00:18:36.112 "trtype": "TCP", 00:18:36.112 "adrfam": "IPv4", 00:18:36.112 "traddr": "10.0.0.1", 00:18:36.112 "trsvcid": "41034" 00:18:36.112 }, 00:18:36.112 "auth": { 00:18:36.112 "state": "completed", 00:18:36.112 "digest": "sha256", 00:18:36.112 "dhgroup": "ffdhe8192" 00:18:36.112 } 00:18:36.112 } 00:18:36.112 ]' 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.112 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.374 10:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.318 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.889 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.889 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.889 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.889 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.889 { 00:18:37.889 "cntlid": 45, 00:18:37.889 "qid": 0, 00:18:37.889 "state": "enabled", 00:18:37.889 "thread": "nvmf_tgt_poll_group_000", 00:18:37.889 "listen_address": { 00:18:37.889 "trtype": "TCP", 00:18:37.889 "adrfam": "IPv4", 00:18:37.889 "traddr": "10.0.0.2", 00:18:37.889 "trsvcid": "4420" 00:18:37.889 }, 00:18:37.889 "peer_address": { 00:18:37.889 "trtype": "TCP", 00:18:37.889 "adrfam": "IPv4", 00:18:37.889 "traddr": "10.0.0.1", 00:18:37.889 "trsvcid": "41054" 00:18:37.889 }, 00:18:37.889 "auth": { 00:18:37.889 "state": "completed", 00:18:37.889 "digest": "sha256", 00:18:37.889 "dhgroup": "ffdhe8192" 00:18:37.889 } 00:18:37.889 } 00:18:37.889 ]' 00:18:37.889 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.150 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.411 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret 
DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.984 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.253 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:39.886 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.886 10:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.886 { 00:18:39.886 "cntlid": 47, 00:18:39.886 "qid": 0, 00:18:39.886 "state": "enabled", 00:18:39.886 "thread": "nvmf_tgt_poll_group_000", 00:18:39.886 "listen_address": { 00:18:39.886 "trtype": "TCP", 00:18:39.886 "adrfam": "IPv4", 00:18:39.886 "traddr": "10.0.0.2", 00:18:39.886 "trsvcid": "4420" 00:18:39.886 }, 00:18:39.886 "peer_address": { 00:18:39.886 "trtype": "TCP", 00:18:39.886 "adrfam": "IPv4", 00:18:39.886 "traddr": "10.0.0.1", 00:18:39.886 "trsvcid": "41072" 00:18:39.886 }, 00:18:39.886 "auth": { 00:18:39.886 "state": "completed", 00:18:39.886 "digest": "sha256", 00:18:39.886 "dhgroup": "ffdhe8192" 00:18:39.886 } 00:18:39.886 } 00:18:39.886 ]' 00:18:39.886 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.145 10:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.086 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:41.346 00:18:41.346 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.346 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:41.346 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.608 { 00:18:41.608 "cntlid": 49, 00:18:41.608 "qid": 0, 00:18:41.608 "state": "enabled", 00:18:41.608 "thread": "nvmf_tgt_poll_group_000", 00:18:41.608 "listen_address": { 00:18:41.608 "trtype": "TCP", 00:18:41.608 "adrfam": "IPv4", 00:18:41.608 "traddr": "10.0.0.2", 00:18:41.608 "trsvcid": "4420" 00:18:41.608 }, 00:18:41.608 "peer_address": { 00:18:41.608 "trtype": "TCP", 00:18:41.608 "adrfam": "IPv4", 00:18:41.608 "traddr": "10.0.0.1", 00:18:41.608 "trsvcid": "41112" 00:18:41.608 }, 00:18:41.608 "auth": { 00:18:41.608 "state": "completed", 00:18:41.608 "digest": "sha384", 00:18:41.608 "dhgroup": "null" 00:18:41.608 } 00:18:41.608 } 00:18:41.608 ]' 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.608 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.868 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:42.810 10:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.810 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.811 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.071 00:18:43.071 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.071 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.071 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.332 { 00:18:43.332 "cntlid": 51, 00:18:43.332 "qid": 0, 00:18:43.332 "state": "enabled", 00:18:43.332 "thread": "nvmf_tgt_poll_group_000", 00:18:43.332 "listen_address": { 00:18:43.332 "trtype": "TCP", 00:18:43.332 "adrfam": "IPv4", 00:18:43.332 "traddr": "10.0.0.2", 00:18:43.332 "trsvcid": "4420" 00:18:43.332 }, 00:18:43.332 "peer_address": { 00:18:43.332 "trtype": "TCP", 00:18:43.332 "adrfam": "IPv4", 00:18:43.332 "traddr": "10.0.0.1", 00:18:43.332 "trsvcid": "41142" 00:18:43.332 }, 00:18:43.332 "auth": { 00:18:43.332 "state": "completed", 00:18:43.332 "digest": "sha384", 00:18:43.332 "dhgroup": "null" 00:18:43.332 } 00:18:43.332 } 00:18:43.332 ]' 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.332 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.593 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:44.164 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.164 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.164 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.164 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.424 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.684 00:18:44.684 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.684 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.684 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.944 { 00:18:44.944 "cntlid": 53, 00:18:44.944 "qid": 0, 00:18:44.944 "state": "enabled", 00:18:44.944 "thread": "nvmf_tgt_poll_group_000", 00:18:44.944 "listen_address": { 00:18:44.944 "trtype": "TCP", 00:18:44.944 "adrfam": "IPv4", 00:18:44.944 "traddr": "10.0.0.2", 00:18:44.944 "trsvcid": "4420" 00:18:44.944 }, 00:18:44.944 "peer_address": { 00:18:44.944 "trtype": "TCP", 00:18:44.944 "adrfam": "IPv4", 00:18:44.944 "traddr": "10.0.0.1", 00:18:44.944 "trsvcid": "41168" 00:18:44.944 }, 00:18:44.944 "auth": { 00:18:44.944 "state": "completed", 00:18:44.944 "digest": "sha384", 00:18:44.944 "dhgroup": "null" 00:18:44.944 } 00:18:44.944 } 00:18:44.944 ]' 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.944 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.205 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:45.776 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.036 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.036 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.036 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.036 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.036 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:46.037 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.037 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.297 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:46.297 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.558 { 00:18:46.558 "cntlid": 55, 00:18:46.558 "qid": 0, 00:18:46.558 "state": "enabled", 00:18:46.558 "thread": "nvmf_tgt_poll_group_000", 00:18:46.558 "listen_address": { 00:18:46.558 "trtype": "TCP", 00:18:46.558 "adrfam": "IPv4", 00:18:46.558 "traddr": "10.0.0.2", 00:18:46.558 "trsvcid": "4420" 00:18:46.558 }, 00:18:46.558 "peer_address": { 
00:18:46.558 "trtype": "TCP", 00:18:46.558 "adrfam": "IPv4", 00:18:46.558 "traddr": "10.0.0.1", 00:18:46.558 "trsvcid": "53224" 00:18:46.558 }, 00:18:46.558 "auth": { 00:18:46.558 "state": "completed", 00:18:46.558 "digest": "sha384", 00:18:46.558 "dhgroup": "null" 00:18:46.558 } 00:18:46.558 } 00:18:46.558 ]' 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:46.558 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.819 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.761 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:48.023 00:18:48.023 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.023 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.023 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.283 { 00:18:48.283 "cntlid": 57, 00:18:48.283 "qid": 0, 00:18:48.283 "state": "enabled", 00:18:48.283 "thread": "nvmf_tgt_poll_group_000", 00:18:48.283 "listen_address": { 00:18:48.283 "trtype": "TCP", 00:18:48.283 "adrfam": "IPv4", 00:18:48.283 "traddr": "10.0.0.2", 00:18:48.283 "trsvcid": "4420" 00:18:48.283 }, 00:18:48.283 "peer_address": { 00:18:48.283 "trtype": "TCP", 00:18:48.283 "adrfam": "IPv4", 00:18:48.283 "traddr": "10.0.0.1", 00:18:48.283 "trsvcid": "53238" 00:18:48.283 }, 00:18:48.283 "auth": { 00:18:48.283 "state": "completed", 00:18:48.283 "digest": "sha384", 00:18:48.283 "dhgroup": "ffdhe2048" 00:18:48.283 } 00:18:48.283 } 00:18:48.283 ]' 
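The qpairs dump captured just above is followed by three jq checks on the first qpair's auth block (digest, dhgroup, state). As a minimal sketch of that verification step, assembled only from commands visible in this trace, it could be written as below; the host RPC socket /var/tmp/host.sock, the subsystem NQN and the expected sha384/ffdhe2048/completed values come from this run, while the target-side call is assumed to go to rpc.py's default socket:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0

    # Host side: confirm the controller created by the attach step is present.
    [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # Target side: fetch the subsystem's qpairs and check the negotiated auth parameters.
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]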
00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.283 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.544 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.488 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.749 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.749 { 00:18:49.749 "cntlid": 59, 00:18:49.749 "qid": 0, 00:18:49.749 "state": "enabled", 00:18:49.749 "thread": "nvmf_tgt_poll_group_000", 00:18:49.749 "listen_address": { 00:18:49.749 "trtype": "TCP", 00:18:49.749 "adrfam": "IPv4", 00:18:49.749 "traddr": "10.0.0.2", 00:18:49.749 "trsvcid": "4420" 00:18:49.749 }, 00:18:49.749 "peer_address": { 00:18:49.749 "trtype": "TCP", 00:18:49.749 "adrfam": "IPv4", 00:18:49.749 "traddr": "10.0.0.1", 00:18:49.749 "trsvcid": "53258" 00:18:49.749 }, 00:18:49.749 "auth": { 00:18:49.749 "state": "completed", 00:18:49.749 "digest": "sha384", 00:18:49.749 "dhgroup": "ffdhe2048" 00:18:49.749 } 00:18:49.749 } 00:18:49.749 ]' 00:18:49.749 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.010 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.010 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.010 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.010 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.010 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.010 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.010 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.272 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.844 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:50.844 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.105 
10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.105 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.366 00:18:51.366 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:51.366 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:51.366 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.626 { 00:18:51.626 "cntlid": 61, 00:18:51.626 "qid": 0, 00:18:51.626 "state": "enabled", 00:18:51.626 "thread": "nvmf_tgt_poll_group_000", 00:18:51.626 "listen_address": { 00:18:51.626 "trtype": "TCP", 00:18:51.626 "adrfam": "IPv4", 00:18:51.626 "traddr": "10.0.0.2", 00:18:51.626 "trsvcid": "4420" 00:18:51.626 }, 00:18:51.626 "peer_address": { 00:18:51.626 "trtype": "TCP", 00:18:51.626 "adrfam": "IPv4", 00:18:51.626 "traddr": "10.0.0.1", 00:18:51.626 "trsvcid": "53280" 00:18:51.626 }, 00:18:51.626 "auth": { 00:18:51.626 "state": "completed", 00:18:51.626 "digest": "sha384", 00:18:51.626 "dhgroup": "ffdhe2048" 00:18:51.626 } 00:18:51.626 } 00:18:51.626 ]' 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.626 10:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.626 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.886 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:52.457 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.457 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:52.457 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.457 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.457 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.718 
10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.718 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.979 00:18:52.979 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.979 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.979 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:53.240 { 00:18:53.240 "cntlid": 63, 00:18:53.240 "qid": 0, 00:18:53.240 "state": "enabled", 00:18:53.240 "thread": "nvmf_tgt_poll_group_000", 00:18:53.240 "listen_address": { 00:18:53.240 "trtype": "TCP", 00:18:53.240 "adrfam": "IPv4", 00:18:53.240 "traddr": "10.0.0.2", 00:18:53.240 "trsvcid": "4420" 00:18:53.240 }, 00:18:53.240 "peer_address": { 00:18:53.240 "trtype": "TCP", 00:18:53.240 "adrfam": "IPv4", 00:18:53.240 "traddr": "10.0.0.1", 00:18:53.240 "trsvcid": "53304" 00:18:53.240 }, 00:18:53.240 "auth": { 00:18:53.240 "state": "completed", 00:18:53.240 "digest": "sha384", 00:18:53.240 "dhgroup": "ffdhe2048" 00:18:53.240 } 00:18:53.240 } 00:18:53.240 ]' 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.240 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:53.502 10:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.075 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.334 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.335 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.335 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.335 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.335 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.335 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.335 10:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.594 00:18:54.594 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.594 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.594 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.853 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.853 { 00:18:54.853 "cntlid": 65, 00:18:54.853 "qid": 0, 00:18:54.853 "state": "enabled", 00:18:54.853 "thread": "nvmf_tgt_poll_group_000", 00:18:54.853 "listen_address": { 00:18:54.853 "trtype": "TCP", 00:18:54.853 "adrfam": "IPv4", 00:18:54.853 "traddr": "10.0.0.2", 00:18:54.854 "trsvcid": "4420" 00:18:54.854 }, 00:18:54.854 "peer_address": { 00:18:54.854 "trtype": "TCP", 00:18:54.854 "adrfam": "IPv4", 00:18:54.854 "traddr": "10.0.0.1", 00:18:54.854 "trsvcid": "53328" 00:18:54.854 }, 00:18:54.854 "auth": { 00:18:54.854 "state": "completed", 00:18:54.854 "digest": "sha384", 00:18:54.854 "dhgroup": "ffdhe3072" 00:18:54.854 } 00:18:54.854 } 00:18:54.854 ]' 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.854 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.114 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:18:55.684 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.944 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.944 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.944 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.944 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.204 00:18:56.204 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.204 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.204 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.506 { 00:18:56.506 "cntlid": 67, 00:18:56.506 "qid": 0, 00:18:56.506 "state": "enabled", 00:18:56.506 "thread": "nvmf_tgt_poll_group_000", 00:18:56.506 "listen_address": { 00:18:56.506 "trtype": "TCP", 00:18:56.506 "adrfam": "IPv4", 00:18:56.506 "traddr": "10.0.0.2", 00:18:56.506 "trsvcid": "4420" 00:18:56.506 }, 00:18:56.506 "peer_address": { 00:18:56.506 "trtype": "TCP", 00:18:56.506 "adrfam": "IPv4", 00:18:56.506 "traddr": "10.0.0.1", 00:18:56.506 "trsvcid": "39936" 00:18:56.506 }, 00:18:56.506 "auth": { 00:18:56.506 "state": "completed", 00:18:56.506 "digest": "sha384", 00:18:56.506 "dhgroup": "ffdhe3072" 00:18:56.506 } 00:18:56.506 } 00:18:56.506 ]' 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.506 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.771 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:18:57.715 10:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.715 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.976 00:18:57.976 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.976 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.976 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.976 { 00:18:57.976 "cntlid": 69, 00:18:57.976 "qid": 0, 00:18:57.976 "state": "enabled", 00:18:57.976 "thread": "nvmf_tgt_poll_group_000", 00:18:57.976 "listen_address": { 00:18:57.976 "trtype": "TCP", 00:18:57.976 "adrfam": "IPv4", 00:18:57.976 "traddr": "10.0.0.2", 00:18:57.976 "trsvcid": "4420" 00:18:57.976 }, 00:18:57.976 "peer_address": { 00:18:57.976 "trtype": "TCP", 00:18:57.976 "adrfam": "IPv4", 00:18:57.976 "traddr": "10.0.0.1", 00:18:57.976 "trsvcid": "39946" 00:18:57.976 }, 00:18:57.976 "auth": { 00:18:57.976 "state": "completed", 00:18:57.976 "digest": "sha384", 00:18:57.976 "dhgroup": "ffdhe3072" 00:18:57.976 } 00:18:57.976 } 00:18:57.976 ]' 00:18:57.976 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.236 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.497 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.070 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.331 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:59.592 00:18:59.592 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.592 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.592 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.853 10:07:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.853 { 00:18:59.853 "cntlid": 71, 00:18:59.853 "qid": 0, 00:18:59.853 "state": "enabled", 00:18:59.853 "thread": "nvmf_tgt_poll_group_000", 00:18:59.853 "listen_address": { 00:18:59.853 "trtype": "TCP", 00:18:59.853 "adrfam": "IPv4", 00:18:59.853 "traddr": "10.0.0.2", 00:18:59.853 "trsvcid": "4420" 00:18:59.853 }, 00:18:59.853 "peer_address": { 00:18:59.853 "trtype": "TCP", 00:18:59.853 "adrfam": "IPv4", 00:18:59.853 "traddr": "10.0.0.1", 00:18:59.853 "trsvcid": "39980" 00:18:59.853 }, 00:18:59.853 "auth": { 00:18:59.853 "state": "completed", 00:18:59.853 "digest": "sha384", 00:18:59.853 "dhgroup": "ffdhe3072" 00:18:59.853 } 00:18:59.853 } 00:18:59.853 ]' 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.853 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.114 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.687 10:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.687 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:00.948 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:01.210 00:19:01.210 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.210 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.210 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.471 10:07:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.471 { 00:19:01.471 "cntlid": 73, 00:19:01.471 "qid": 0, 00:19:01.471 "state": "enabled", 00:19:01.471 "thread": "nvmf_tgt_poll_group_000", 00:19:01.471 "listen_address": { 00:19:01.471 "trtype": "TCP", 00:19:01.471 "adrfam": "IPv4", 00:19:01.471 "traddr": "10.0.0.2", 00:19:01.471 "trsvcid": "4420" 00:19:01.471 }, 00:19:01.471 "peer_address": { 00:19:01.471 "trtype": "TCP", 00:19:01.471 "adrfam": "IPv4", 00:19:01.471 "traddr": "10.0.0.1", 00:19:01.471 "trsvcid": "40012" 00:19:01.471 }, 00:19:01.471 "auth": { 00:19:01.471 "state": "completed", 00:19:01.471 "digest": "sha384", 00:19:01.471 "dhgroup": "ffdhe4096" 00:19:01.471 } 00:19:01.471 } 00:19:01.471 ]' 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.471 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.732 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.676 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:02.937 00:19:02.937 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.937 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.937 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.937 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.937 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.937 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.937 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:19:03.198 { 00:19:03.198 "cntlid": 75, 00:19:03.198 "qid": 0, 00:19:03.198 "state": "enabled", 00:19:03.198 "thread": "nvmf_tgt_poll_group_000", 00:19:03.198 "listen_address": { 00:19:03.198 "trtype": "TCP", 00:19:03.198 "adrfam": "IPv4", 00:19:03.198 "traddr": "10.0.0.2", 00:19:03.198 "trsvcid": "4420" 00:19:03.198 }, 00:19:03.198 "peer_address": { 00:19:03.198 "trtype": "TCP", 00:19:03.198 "adrfam": "IPv4", 00:19:03.198 "traddr": "10.0.0.1", 00:19:03.198 "trsvcid": "40042" 00:19:03.198 }, 00:19:03.198 "auth": { 00:19:03.198 "state": "completed", 00:19:03.198 "digest": "sha384", 00:19:03.198 "dhgroup": "ffdhe4096" 00:19:03.198 } 00:19:03.198 } 00:19:03.198 ]' 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.198 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.460 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.033 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:04.294 
10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.294 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:04.555 00:19:04.555 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.555 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.555 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.816 { 00:19:04.816 "cntlid": 77, 00:19:04.816 "qid": 0, 00:19:04.816 "state": "enabled", 00:19:04.816 "thread": "nvmf_tgt_poll_group_000", 00:19:04.816 "listen_address": { 00:19:04.816 "trtype": "TCP", 00:19:04.816 "adrfam": "IPv4", 00:19:04.816 "traddr": "10.0.0.2", 00:19:04.816 "trsvcid": "4420" 00:19:04.816 }, 00:19:04.816 "peer_address": { 
00:19:04.816 "trtype": "TCP", 00:19:04.816 "adrfam": "IPv4", 00:19:04.816 "traddr": "10.0.0.1", 00:19:04.816 "trsvcid": "40058" 00:19:04.816 }, 00:19:04.816 "auth": { 00:19:04.816 "state": "completed", 00:19:04.816 "digest": "sha384", 00:19:04.816 "dhgroup": "ffdhe4096" 00:19:04.816 } 00:19:04.816 } 00:19:04.816 ]' 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.816 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.077 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.019 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:06.281 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.281 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.542 { 00:19:06.542 "cntlid": 79, 00:19:06.542 "qid": 0, 00:19:06.542 "state": "enabled", 00:19:06.542 "thread": "nvmf_tgt_poll_group_000", 00:19:06.542 "listen_address": { 00:19:06.542 "trtype": "TCP", 00:19:06.542 "adrfam": "IPv4", 00:19:06.542 "traddr": "10.0.0.2", 00:19:06.542 "trsvcid": "4420" 00:19:06.542 }, 00:19:06.542 "peer_address": { 00:19:06.542 "trtype": "TCP", 00:19:06.542 "adrfam": "IPv4", 00:19:06.542 "traddr": "10.0.0.1", 00:19:06.542 "trsvcid": "54536" 00:19:06.542 }, 00:19:06.542 "auth": { 00:19:06.542 "state": "completed", 00:19:06.542 "digest": "sha384", 00:19:06.542 "dhgroup": "ffdhe4096" 00:19:06.542 } 00:19:06.542 } 00:19:06.542 ]' 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.542 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.803 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.375 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
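[editor's note] Each pass of the loop logged here has the same shape: pin the host-side bdev_nvme options to a single digest/DH-group pair, allow the host on the subsystem with the key under test (plus a controller key when one exists; key3 has none in this run), then attach a controller from the host app so the DH-HMAC-CHAP exchange actually runs. A condensed sketch of that sequence, with socket paths, NQNs, and key names taken from the log (keyN/ckeyN are keyring entries registered earlier in the test, outside this excerpt); the real auth.sh may differ in detail:

  # Sketch of one connect_authenticate-style iteration (sha384 / ffdhe6144 / key0).
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  digest=sha384 dhgroup=ffdhe6144 keyid=0

  # Host bdev app: accept only this digest/dhgroup combination.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # Target: allow the host, requiring key$keyid (drop the array for key3, which has no ctrlr key here).
  ctrlr_key=(--dhchap-ctrlr-key "ckey$keyid")
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" "${ctrlr_key[@]}"

  # Host: attach the controller; this is where the DH-HMAC-CHAP exchange happens.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key "key$keyid" "${ctrlr_key[@]}"

Running the initiator as a second SPDK app on /var/tmp/host.sock appears to be what lets the test drive both ends of the authentication from one machine before the kernel-initiator leg below.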
00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.636 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.897 00:19:07.897 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.897 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.897 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.158 { 00:19:08.158 "cntlid": 81, 00:19:08.158 "qid": 0, 00:19:08.158 "state": "enabled", 00:19:08.158 "thread": "nvmf_tgt_poll_group_000", 00:19:08.158 "listen_address": { 00:19:08.158 "trtype": "TCP", 00:19:08.158 "adrfam": "IPv4", 00:19:08.158 "traddr": "10.0.0.2", 00:19:08.158 "trsvcid": "4420" 00:19:08.158 }, 00:19:08.158 "peer_address": { 00:19:08.158 "trtype": "TCP", 00:19:08.158 "adrfam": "IPv4", 00:19:08.158 "traddr": "10.0.0.1", 00:19:08.158 "trsvcid": "54576" 00:19:08.158 }, 00:19:08.158 "auth": { 00:19:08.158 "state": "completed", 00:19:08.158 "digest": "sha384", 00:19:08.158 "dhgroup": "ffdhe6144" 00:19:08.158 } 00:19:08.158 } 00:19:08.158 ]' 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.158 10:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:08.158 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.419 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.419 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.419 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.419 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.361 10:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.361 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.621 00:19:09.622 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.622 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.622 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.882 { 00:19:09.882 "cntlid": 83, 00:19:09.882 "qid": 0, 00:19:09.882 "state": "enabled", 00:19:09.882 "thread": "nvmf_tgt_poll_group_000", 00:19:09.882 "listen_address": { 00:19:09.882 "trtype": "TCP", 00:19:09.882 "adrfam": "IPv4", 00:19:09.882 "traddr": "10.0.0.2", 00:19:09.882 "trsvcid": "4420" 00:19:09.882 }, 00:19:09.882 "peer_address": { 00:19:09.882 "trtype": "TCP", 00:19:09.882 "adrfam": "IPv4", 00:19:09.882 "traddr": "10.0.0.1", 00:19:09.882 "trsvcid": "54598" 00:19:09.882 }, 00:19:09.882 "auth": { 00:19:09.882 "state": "completed", 00:19:09.882 "digest": "sha384", 00:19:09.882 "dhgroup": "ffdhe6144" 00:19:09.882 } 00:19:09.882 } 00:19:09.882 ]' 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.882 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.882 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:10.143 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.143 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.143 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.143 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.143 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.086 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.086 10:07:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.086 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:11.347 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.608 { 00:19:11.608 "cntlid": 85, 00:19:11.608 "qid": 0, 00:19:11.608 "state": "enabled", 00:19:11.608 "thread": "nvmf_tgt_poll_group_000", 00:19:11.608 "listen_address": { 00:19:11.608 "trtype": "TCP", 00:19:11.608 "adrfam": "IPv4", 00:19:11.608 "traddr": "10.0.0.2", 00:19:11.608 "trsvcid": "4420" 00:19:11.608 }, 00:19:11.608 "peer_address": { 00:19:11.608 "trtype": "TCP", 00:19:11.608 "adrfam": "IPv4", 00:19:11.608 "traddr": "10.0.0.1", 00:19:11.608 "trsvcid": "54626" 00:19:11.608 }, 00:19:11.608 "auth": { 00:19:11.608 "state": "completed", 00:19:11.608 "digest": "sha384", 00:19:11.608 "dhgroup": "ffdhe6144" 00:19:11.608 } 00:19:11.608 } 00:19:11.608 ]' 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:11.608 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.869 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.869 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.869 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.869 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.813 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.813 10:07:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.074 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.335 { 00:19:13.335 "cntlid": 87, 00:19:13.335 "qid": 0, 00:19:13.335 "state": "enabled", 00:19:13.335 "thread": "nvmf_tgt_poll_group_000", 00:19:13.335 "listen_address": { 00:19:13.335 "trtype": "TCP", 00:19:13.335 "adrfam": "IPv4", 00:19:13.335 "traddr": "10.0.0.2", 00:19:13.335 "trsvcid": "4420" 00:19:13.335 }, 00:19:13.335 "peer_address": { 00:19:13.335 "trtype": "TCP", 00:19:13.335 "adrfam": "IPv4", 00:19:13.335 "traddr": "10.0.0.1", 00:19:13.335 "trsvcid": "54638" 00:19:13.335 }, 00:19:13.335 "auth": { 00:19:13.335 "state": "completed", 00:19:13.335 "digest": "sha384", 00:19:13.335 "dhgroup": "ffdhe6144" 00:19:13.335 } 00:19:13.335 } 00:19:13.335 ]' 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.335 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.597 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.566 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.138 00:19:15.138 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.138 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.138 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.399 { 00:19:15.399 "cntlid": 89, 00:19:15.399 "qid": 0, 00:19:15.399 "state": "enabled", 00:19:15.399 "thread": "nvmf_tgt_poll_group_000", 00:19:15.399 "listen_address": { 00:19:15.399 "trtype": "TCP", 00:19:15.399 "adrfam": "IPv4", 00:19:15.399 "traddr": "10.0.0.2", 00:19:15.399 "trsvcid": "4420" 00:19:15.399 }, 00:19:15.399 "peer_address": { 00:19:15.399 "trtype": "TCP", 00:19:15.399 "adrfam": "IPv4", 00:19:15.399 "traddr": "10.0.0.1", 00:19:15.399 "trsvcid": "54652" 00:19:15.399 }, 00:19:15.399 "auth": { 00:19:15.399 "state": "completed", 00:19:15.399 "digest": "sha384", 00:19:15.399 "dhgroup": "ffdhe8192" 00:19:15.399 } 00:19:15.399 } 00:19:15.399 ]' 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.399 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:16.232 10:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.494 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.494 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.066 00:19:17.066 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.066 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.066 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.327 { 00:19:17.327 "cntlid": 91, 00:19:17.327 "qid": 0, 00:19:17.327 "state": "enabled", 00:19:17.327 "thread": "nvmf_tgt_poll_group_000", 00:19:17.327 "listen_address": { 00:19:17.327 "trtype": "TCP", 00:19:17.327 "adrfam": "IPv4", 00:19:17.327 "traddr": "10.0.0.2", 00:19:17.327 "trsvcid": "4420" 00:19:17.327 }, 00:19:17.327 "peer_address": { 00:19:17.327 "trtype": "TCP", 00:19:17.327 "adrfam": "IPv4", 00:19:17.327 "traddr": "10.0.0.1", 00:19:17.327 "trsvcid": "35278" 00:19:17.327 }, 00:19:17.327 "auth": { 00:19:17.327 "state": "completed", 00:19:17.327 "digest": "sha384", 00:19:17.327 "dhgroup": "ffdhe8192" 00:19:17.327 } 00:19:17.327 } 00:19:17.327 ]' 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.327 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.588 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:18.531 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.103 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.103 { 00:19:19.103 "cntlid": 93, 00:19:19.103 "qid": 0, 00:19:19.103 "state": "enabled", 00:19:19.103 "thread": "nvmf_tgt_poll_group_000", 00:19:19.103 "listen_address": { 00:19:19.103 "trtype": "TCP", 00:19:19.103 "adrfam": "IPv4", 00:19:19.103 "traddr": "10.0.0.2", 00:19:19.103 "trsvcid": "4420" 00:19:19.103 }, 00:19:19.103 "peer_address": { 00:19:19.103 "trtype": "TCP", 00:19:19.103 "adrfam": "IPv4", 00:19:19.103 "traddr": "10.0.0.1", 00:19:19.103 "trsvcid": "35320" 00:19:19.103 }, 00:19:19.103 "auth": { 00:19:19.103 "state": "completed", 00:19:19.103 "digest": "sha384", 00:19:19.103 "dhgroup": "ffdhe8192" 00:19:19.103 } 00:19:19.103 } 00:19:19.103 ]' 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.103 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.365 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.307 10:07:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:20.307 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.308 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:20.880 00:19:20.880 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.880 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.880 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.141 { 00:19:21.141 "cntlid": 95, 00:19:21.141 "qid": 0, 00:19:21.141 "state": "enabled", 00:19:21.141 "thread": "nvmf_tgt_poll_group_000", 00:19:21.141 "listen_address": { 00:19:21.141 "trtype": "TCP", 00:19:21.141 "adrfam": "IPv4", 00:19:21.141 "traddr": "10.0.0.2", 00:19:21.141 "trsvcid": "4420" 00:19:21.141 }, 00:19:21.141 "peer_address": { 00:19:21.141 "trtype": "TCP", 00:19:21.141 "adrfam": "IPv4", 00:19:21.141 "traddr": "10.0.0.1", 00:19:21.141 "trsvcid": "35338" 00:19:21.141 }, 00:19:21.141 "auth": { 00:19:21.141 "state": "completed", 00:19:21.141 "digest": "sha384", 00:19:21.141 "dhgroup": "ffdhe8192" 00:19:21.141 } 00:19:21.141 } 00:19:21.141 ]' 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.141 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.403 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.347 10:08:01 
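[editor's note] The "for digest" / "for dhgroup" / "for keyid" trace lines above come from the test's outer iteration: each digest and DH-group combination is applied to the host side with bdev_nvme_set_options, and the connect/verify cycle then repeats for every configured key index. A minimal bash sketch of that structure, reconstructed from the trace rather than copied from target/auth.sh (the contents of digests/dhgroups and the keys/ckeys arrays are assumptions; only sha384/sha512 and null/ffdhe2048/ffdhe3072/ffdhe8192 are visible in this excerpt; hostrpc and connect_authenticate are the helpers whose expansions appear throughout the log):

    # Outer loop as suggested by the target/auth.sh@91-94 trace lines (sketch, not the literal script).
    digests=(sha384 sha512)                        # assumed subset; only these appear in this excerpt
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe8192)  # assumed subset visible in this excerpt
    # keys[] / ckeys[] are assumed to hold DH-HMAC-CHAP key names registered earlier in the test.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # Restrict the host to exactly one digest/DH-group pair before each attempt,
                # so a successful attach proves that this specific combination negotiated.
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done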
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.347 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.608 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.608 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.869 10:08:01 
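[editor's note] Throughout the trace, rpc_cmd entries (nvmf_subsystem_add_host, nvmf_subsystem_get_qpairs, ...) address the NVMe-oF target application, while hostrpc entries expand to "rpc.py -s /var/tmp/host.sock ...", i.e. a second SPDK application acting as the NVMe/TCP host on its own RPC socket. A plausible definition of that wrapper, assuming the repository layout shown in the paths above (a sketch, not necessarily the literal helper in target/auth.sh; the variable names are assumptions):

    # hostrpc: route an RPC to the host-side SPDK app instead of the target (sketch).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path as it appears in the log
    host_sock=/var/tmp/host.sock                                # host app's RPC socket, as seen in the log

    hostrpc() {
        # Same rpc.py client, different Unix-domain socket than the default target socket.
        "$rootdir/scripts/rpc.py" -s "$host_sock" "$@"
    }

    # Example: query the host app for attached NVMe bdev controllers.
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'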
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.869 { 00:19:22.869 "cntlid": 97, 00:19:22.869 "qid": 0, 00:19:22.869 "state": "enabled", 00:19:22.869 "thread": "nvmf_tgt_poll_group_000", 00:19:22.869 "listen_address": { 00:19:22.869 "trtype": "TCP", 00:19:22.869 "adrfam": "IPv4", 00:19:22.869 "traddr": "10.0.0.2", 00:19:22.869 "trsvcid": "4420" 00:19:22.869 }, 00:19:22.869 "peer_address": { 00:19:22.869 "trtype": "TCP", 00:19:22.869 "adrfam": "IPv4", 00:19:22.869 "traddr": "10.0.0.1", 00:19:22.869 "trsvcid": "35360" 00:19:22.869 }, 00:19:22.869 "auth": { 00:19:22.869 "state": "completed", 00:19:22.869 "digest": "sha512", 00:19:22.869 "dhgroup": "null" 00:19:22.869 } 00:19:22.869 } 00:19:22.869 ]' 00:19:22.869 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.869 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:22.869 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.869 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:22.869 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.870 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.870 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.870 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.131 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.705 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.966 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.226 00:19:24.226 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.226 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.226 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.485 { 00:19:24.485 "cntlid": 99, 00:19:24.485 "qid": 0, 00:19:24.485 "state": "enabled", 00:19:24.485 "thread": "nvmf_tgt_poll_group_000", 00:19:24.485 "listen_address": { 00:19:24.485 "trtype": "TCP", 00:19:24.485 "adrfam": "IPv4", 00:19:24.485 
"traddr": "10.0.0.2", 00:19:24.485 "trsvcid": "4420" 00:19:24.485 }, 00:19:24.485 "peer_address": { 00:19:24.485 "trtype": "TCP", 00:19:24.485 "adrfam": "IPv4", 00:19:24.485 "traddr": "10.0.0.1", 00:19:24.485 "trsvcid": "35396" 00:19:24.485 }, 00:19:24.485 "auth": { 00:19:24.485 "state": "completed", 00:19:24.485 "digest": "sha512", 00:19:24.485 "dhgroup": "null" 00:19:24.485 } 00:19:24.485 } 00:19:24.485 ]' 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.485 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.745 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:25.317 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.581 10:08:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.581 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.841 00:19:25.841 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.841 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.841 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.101 { 00:19:26.101 "cntlid": 101, 00:19:26.101 "qid": 0, 00:19:26.101 "state": "enabled", 00:19:26.101 "thread": "nvmf_tgt_poll_group_000", 00:19:26.101 "listen_address": { 00:19:26.101 "trtype": "TCP", 00:19:26.101 "adrfam": "IPv4", 00:19:26.101 "traddr": "10.0.0.2", 00:19:26.101 "trsvcid": "4420" 00:19:26.101 }, 00:19:26.101 "peer_address": { 00:19:26.101 "trtype": "TCP", 00:19:26.101 "adrfam": "IPv4", 00:19:26.101 "traddr": "10.0.0.1", 00:19:26.101 "trsvcid": "48400" 00:19:26.101 }, 00:19:26.101 "auth": { 00:19:26.101 "state": "completed", 00:19:26.101 "digest": "sha512", 00:19:26.101 "dhgroup": "null" 
00:19:26.101 } 00:19:26.101 } 00:19:26.101 ]' 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.101 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.361 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.303 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:27.564 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.564 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.564 { 00:19:27.564 "cntlid": 103, 00:19:27.564 "qid": 0, 00:19:27.564 "state": "enabled", 00:19:27.564 "thread": "nvmf_tgt_poll_group_000", 00:19:27.564 "listen_address": { 00:19:27.564 "trtype": "TCP", 00:19:27.564 "adrfam": "IPv4", 00:19:27.564 "traddr": "10.0.0.2", 00:19:27.564 "trsvcid": "4420" 00:19:27.564 }, 00:19:27.564 "peer_address": { 00:19:27.564 "trtype": "TCP", 00:19:27.564 "adrfam": "IPv4", 00:19:27.564 "traddr": "10.0.0.1", 00:19:27.564 "trsvcid": "48420" 00:19:27.564 }, 00:19:27.564 "auth": { 00:19:27.564 "state": "completed", 00:19:27.564 "digest": "sha512", 00:19:27.564 "dhgroup": "null" 00:19:27.564 } 00:19:27.564 } 00:19:27.564 ]' 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.824 10:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.824 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.084 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.655 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.915 10:08:07 
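[editor's note] The connect_authenticate expansions in this log follow a fixed pattern: register the host NQN on the subsystem with the key for this iteration, then attach a controller from the host side using the same key names, so the attach only succeeds if DH-HMAC-CHAP completes. The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line is the usual bash ':+' conditional expansion: when ckeys[keyid] is empty (as the log suggests for key3) no --dhchap-ctrlr-key is passed, so only unidirectional authentication is exercised for that key. A hedged sketch of the helper's setup half, reconstructed from the trace (argument handling, local names, and the $hostid variable are assumptions; the log shows the literal UUID):

    # Sketch of connect_authenticate <digest> <dhgroup> <keyid>, reconstructed from the trace.
    connect_authenticate() {
        local digest dhgroup key ckey qpairs
        digest=$1 dhgroup=$2 key="key$3"
        # Only add a controller key when a ckey exists for this index (bidirectional auth).
        ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})

        # Target side: allow this host NQN on the subsystem with the chosen key(s).
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            "nqn.2014-08.org.nvmexpress:uuid:$hostid" --dhchap-key "$key" "${ckey[@]}"

        # Host side: attach a controller; this only works if authentication succeeds.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "$key" "${ckey[@]}"
        # The verification and teardown steps visible later in the trace belong to the same
        # helper; they are sketched separately below.
    }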
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:28.915 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:29.174 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.174 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.435 { 00:19:29.435 "cntlid": 105, 00:19:29.435 "qid": 0, 00:19:29.435 "state": "enabled", 00:19:29.435 "thread": "nvmf_tgt_poll_group_000", 00:19:29.435 "listen_address": { 00:19:29.435 "trtype": "TCP", 00:19:29.435 "adrfam": "IPv4", 00:19:29.435 "traddr": "10.0.0.2", 00:19:29.435 "trsvcid": "4420" 00:19:29.435 }, 00:19:29.435 "peer_address": { 00:19:29.435 "trtype": "TCP", 00:19:29.435 "adrfam": "IPv4", 00:19:29.435 "traddr": "10.0.0.1", 00:19:29.435 "trsvcid": "48438" 00:19:29.435 }, 00:19:29.435 "auth": { 00:19:29.435 "state": "completed", 00:19:29.435 "digest": "sha512", 00:19:29.435 "dhgroup": "ffdhe2048" 00:19:29.435 } 00:19:29.435 } 00:19:29.435 ]' 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.435 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.696 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:30.269 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.269 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.269 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.269 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.531 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:30.792 00:19:30.792 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.792 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.792 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.053 { 00:19:31.053 "cntlid": 107, 00:19:31.053 "qid": 0, 00:19:31.053 "state": "enabled", 00:19:31.053 "thread": "nvmf_tgt_poll_group_000", 00:19:31.053 "listen_address": { 00:19:31.053 "trtype": "TCP", 00:19:31.053 "adrfam": "IPv4", 00:19:31.053 "traddr": "10.0.0.2", 00:19:31.053 "trsvcid": "4420" 00:19:31.053 }, 00:19:31.053 "peer_address": { 00:19:31.053 "trtype": "TCP", 00:19:31.053 "adrfam": "IPv4", 00:19:31.053 "traddr": "10.0.0.1", 00:19:31.053 "trsvcid": "48466" 00:19:31.053 }, 00:19:31.053 "auth": { 00:19:31.053 "state": "completed", 00:19:31.053 "digest": "sha512", 00:19:31.053 "dhgroup": "ffdhe2048" 00:19:31.053 } 00:19:31.053 } 00:19:31.053 ]' 00:19:31.053 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.053 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.321 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.925 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:32.186 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.447 00:19:32.448 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.448 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.448 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.709 { 00:19:32.709 "cntlid": 109, 00:19:32.709 "qid": 0, 00:19:32.709 "state": "enabled", 00:19:32.709 "thread": "nvmf_tgt_poll_group_000", 00:19:32.709 "listen_address": { 00:19:32.709 "trtype": "TCP", 00:19:32.709 "adrfam": "IPv4", 00:19:32.709 "traddr": "10.0.0.2", 00:19:32.709 "trsvcid": "4420" 00:19:32.709 }, 00:19:32.709 "peer_address": { 00:19:32.709 "trtype": "TCP", 00:19:32.709 "adrfam": "IPv4", 00:19:32.709 "traddr": "10.0.0.1", 00:19:32.709 "trsvcid": "48492" 00:19:32.709 }, 00:19:32.709 "auth": { 00:19:32.709 "state": "completed", 00:19:32.709 "digest": "sha512", 00:19:32.709 "dhgroup": "ffdhe2048" 00:19:32.709 } 00:19:32.709 } 00:19:32.709 ]' 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.709 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.969 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:33.541 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.803 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.065 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.065 00:19:34.065 10:08:13 
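[editor's note] After every attach, the trace runs the same verification block: the host app must report a controller named nvme0, and nvmf_subsystem_get_qpairs on the target must report a qpair whose auth object matches the digest and DH group under test and is in the "completed" state. A compact sketch of those checks, using the exact RPCs and jq filters seen in the log (the surrounding variables are assumptions):

    # Verification step (sketch): confirm the controller exists and the qpair authenticated
    # with the expected parameters.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]   # e.g. sha512
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]  # e.g. ffdhe2048
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

    # Detach the RPC-attached controller before re-testing the same credentials with nvme-cli.
    hostrpc bdev_nvme_detach_controller nvme0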
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.065 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.065 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.326 { 00:19:34.326 "cntlid": 111, 00:19:34.326 "qid": 0, 00:19:34.326 "state": "enabled", 00:19:34.326 "thread": "nvmf_tgt_poll_group_000", 00:19:34.326 "listen_address": { 00:19:34.326 "trtype": "TCP", 00:19:34.326 "adrfam": "IPv4", 00:19:34.326 "traddr": "10.0.0.2", 00:19:34.326 "trsvcid": "4420" 00:19:34.326 }, 00:19:34.326 "peer_address": { 00:19:34.326 "trtype": "TCP", 00:19:34.326 "adrfam": "IPv4", 00:19:34.326 "traddr": "10.0.0.1", 00:19:34.326 "trsvcid": "48522" 00:19:34.326 }, 00:19:34.326 "auth": { 00:19:34.326 "state": "completed", 00:19:34.326 "digest": "sha512", 00:19:34.326 "dhgroup": "ffdhe2048" 00:19:34.326 } 00:19:34.326 } 00:19:34.326 ]' 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.326 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.588 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.588 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.588 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.588 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.530 10:08:14 
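[editor's note] The nvme connect / nvme disconnect entries exercise the same credentials through the kernel NVMe/TCP initiator via nvme-cli: the in-band secret is passed as a DHHC-1 string with --dhchap-secret and, in the bidirectional cases, the controller secret with --dhchap-ctrl-secret. A sketch of that step with the secret values elided (the dhchap_key/dhchap_ctrlr_key variables are assumptions; the log shows the literal DHHC-1 strings and host UUID):

    # Kernel-initiator check (sketch): connect with DH-HMAC-CHAP secrets, then clean up.
    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be        # host UUID as it appears in the log
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$hostid" --hostid "$hostid" \
        --dhchap-secret "$dhchap_key" \
        ${dhchap_ctrlr_key:+--dhchap-ctrl-secret "$dhchap_ctrlr_key"}   # assumed variable names

    nvme disconnect -n nqn.2024-03.io.spdk:cnode0

    # Finally the host NQN is removed from the subsystem before the next iteration.
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        "nqn.2014-08.org.nvmexpress:uuid:$hostid"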
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.530 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.791 00:19:35.791 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.791 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.791 10:08:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.051 { 00:19:36.051 "cntlid": 113, 00:19:36.051 "qid": 0, 00:19:36.051 "state": "enabled", 00:19:36.051 "thread": "nvmf_tgt_poll_group_000", 00:19:36.051 "listen_address": { 00:19:36.051 "trtype": "TCP", 00:19:36.051 "adrfam": "IPv4", 00:19:36.051 "traddr": "10.0.0.2", 00:19:36.051 "trsvcid": "4420" 00:19:36.051 }, 00:19:36.051 "peer_address": { 00:19:36.051 "trtype": "TCP", 00:19:36.051 "adrfam": "IPv4", 00:19:36.051 "traddr": "10.0.0.1", 00:19:36.051 "trsvcid": "59836" 00:19:36.051 }, 00:19:36.051 "auth": { 00:19:36.051 "state": "completed", 00:19:36.051 "digest": "sha512", 00:19:36.051 "dhgroup": "ffdhe3072" 00:19:36.051 } 00:19:36.051 } 00:19:36.051 ]' 00:19:36.051 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.051 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.312 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:37.254 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.255 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.255 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.516 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.516 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.516 { 00:19:37.516 "cntlid": 115, 00:19:37.516 "qid": 0, 00:19:37.516 "state": "enabled", 00:19:37.516 "thread": "nvmf_tgt_poll_group_000", 00:19:37.516 "listen_address": { 00:19:37.516 "trtype": "TCP", 00:19:37.516 "adrfam": "IPv4", 00:19:37.516 "traddr": "10.0.0.2", 00:19:37.516 "trsvcid": "4420" 00:19:37.516 }, 00:19:37.516 "peer_address": { 00:19:37.516 "trtype": "TCP", 00:19:37.516 "adrfam": "IPv4", 00:19:37.516 "traddr": "10.0.0.1", 00:19:37.516 "trsvcid": "59854" 00:19:37.516 }, 00:19:37.516 "auth": { 00:19:37.516 "state": "completed", 00:19:37.516 "digest": "sha512", 00:19:37.516 "dhgroup": "ffdhe3072" 00:19:37.516 } 00:19:37.516 } 00:19:37.516 ]' 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.777 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.038 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.610 10:08:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.610 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.872 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.133 00:19:39.133 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:39.133 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:39.133 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.394 10:08:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:39.394 { 00:19:39.394 "cntlid": 117, 00:19:39.394 "qid": 0, 00:19:39.394 "state": "enabled", 00:19:39.394 "thread": "nvmf_tgt_poll_group_000", 00:19:39.394 "listen_address": { 00:19:39.394 "trtype": "TCP", 00:19:39.394 "adrfam": "IPv4", 00:19:39.394 "traddr": "10.0.0.2", 00:19:39.394 "trsvcid": "4420" 00:19:39.394 }, 00:19:39.394 "peer_address": { 00:19:39.394 "trtype": "TCP", 00:19:39.394 "adrfam": "IPv4", 00:19:39.394 "traddr": "10.0.0.1", 00:19:39.394 "trsvcid": "59886" 00:19:39.394 }, 00:19:39.394 "auth": { 00:19:39.394 "state": "completed", 00:19:39.394 "digest": "sha512", 00:19:39.394 "dhgroup": "ffdhe3072" 00:19:39.394 } 00:19:39.394 } 00:19:39.394 ]' 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.394 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.655 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:40.226 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.487 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.487 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.748 00:19:40.748 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.748 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.748 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.009 { 00:19:41.009 "cntlid": 119, 00:19:41.009 "qid": 0, 00:19:41.009 "state": "enabled", 00:19:41.009 "thread": 
"nvmf_tgt_poll_group_000", 00:19:41.009 "listen_address": { 00:19:41.009 "trtype": "TCP", 00:19:41.009 "adrfam": "IPv4", 00:19:41.009 "traddr": "10.0.0.2", 00:19:41.009 "trsvcid": "4420" 00:19:41.009 }, 00:19:41.009 "peer_address": { 00:19:41.009 "trtype": "TCP", 00:19:41.009 "adrfam": "IPv4", 00:19:41.009 "traddr": "10.0.0.1", 00:19:41.009 "trsvcid": "59916" 00:19:41.009 }, 00:19:41.009 "auth": { 00:19:41.009 "state": "completed", 00:19:41.009 "digest": "sha512", 00:19:41.009 "dhgroup": "ffdhe3072" 00:19:41.009 } 00:19:41.009 } 00:19:41.009 ]' 00:19:41.009 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.009 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.270 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.213 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.214 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.475 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.475 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.736 { 00:19:42.736 "cntlid": 121, 00:19:42.736 "qid": 0, 00:19:42.736 "state": "enabled", 00:19:42.736 "thread": "nvmf_tgt_poll_group_000", 00:19:42.736 "listen_address": { 00:19:42.736 "trtype": "TCP", 00:19:42.736 "adrfam": "IPv4", 00:19:42.736 "traddr": "10.0.0.2", 00:19:42.736 "trsvcid": "4420" 00:19:42.736 }, 00:19:42.736 "peer_address": { 00:19:42.736 "trtype": "TCP", 00:19:42.736 "adrfam": 
"IPv4", 00:19:42.736 "traddr": "10.0.0.1", 00:19:42.736 "trsvcid": "59936" 00:19:42.736 }, 00:19:42.736 "auth": { 00:19:42.736 "state": "completed", 00:19:42.736 "digest": "sha512", 00:19:42.736 "dhgroup": "ffdhe4096" 00:19:42.736 } 00:19:42.736 } 00:19:42.736 ]' 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.736 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.997 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.569 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.830 
10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:43.830 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:44.091 00:19:44.091 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.091 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.091 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.352 { 00:19:44.352 "cntlid": 123, 00:19:44.352 "qid": 0, 00:19:44.352 "state": "enabled", 00:19:44.352 "thread": "nvmf_tgt_poll_group_000", 00:19:44.352 "listen_address": { 00:19:44.352 "trtype": "TCP", 00:19:44.352 "adrfam": "IPv4", 00:19:44.352 "traddr": "10.0.0.2", 00:19:44.352 "trsvcid": "4420" 00:19:44.352 }, 00:19:44.352 "peer_address": { 00:19:44.352 "trtype": "TCP", 00:19:44.352 "adrfam": "IPv4", 00:19:44.352 "traddr": "10.0.0.1", 00:19:44.352 "trsvcid": "59974" 00:19:44.352 }, 00:19:44.352 "auth": { 00:19:44.352 "state": "completed", 00:19:44.352 "digest": "sha512", 00:19:44.352 "dhgroup": "ffdhe4096" 00:19:44.352 } 00:19:44.352 } 00:19:44.352 ]' 00:19:44.352 10:08:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.352 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.613 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:45.184 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.445 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.445 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.446 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.707 00:19:45.707 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.707 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.707 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.968 { 00:19:45.968 "cntlid": 125, 00:19:45.968 "qid": 0, 00:19:45.968 "state": "enabled", 00:19:45.968 "thread": "nvmf_tgt_poll_group_000", 00:19:45.968 "listen_address": { 00:19:45.968 "trtype": "TCP", 00:19:45.968 "adrfam": "IPv4", 00:19:45.968 "traddr": "10.0.0.2", 00:19:45.968 "trsvcid": "4420" 00:19:45.968 }, 00:19:45.968 "peer_address": { 00:19:45.968 "trtype": "TCP", 00:19:45.968 "adrfam": "IPv4", 00:19:45.968 "traddr": "10.0.0.1", 00:19:45.968 "trsvcid": "48942" 00:19:45.968 }, 00:19:45.968 "auth": { 00:19:45.968 "state": "completed", 00:19:45.968 "digest": "sha512", 00:19:45.968 "dhgroup": "ffdhe4096" 00:19:45.968 } 00:19:45.968 } 00:19:45.968 ]' 00:19:45.968 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.968 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.968 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.968 
10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.968 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.229 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.229 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.229 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.229 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.171 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.172 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.432 00:19:47.432 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.432 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.433 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.694 { 00:19:47.694 "cntlid": 127, 00:19:47.694 "qid": 0, 00:19:47.694 "state": "enabled", 00:19:47.694 "thread": "nvmf_tgt_poll_group_000", 00:19:47.694 "listen_address": { 00:19:47.694 "trtype": "TCP", 00:19:47.694 "adrfam": "IPv4", 00:19:47.694 "traddr": "10.0.0.2", 00:19:47.694 "trsvcid": "4420" 00:19:47.694 }, 00:19:47.694 "peer_address": { 00:19:47.694 "trtype": "TCP", 00:19:47.694 "adrfam": "IPv4", 00:19:47.694 "traddr": "10.0.0.1", 00:19:47.694 "trsvcid": "48972" 00:19:47.694 }, 00:19:47.694 "auth": { 00:19:47.694 "state": "completed", 00:19:47.694 "digest": "sha512", 00:19:47.694 "dhgroup": "ffdhe4096" 00:19:47.694 } 00:19:47.694 } 00:19:47.694 ]' 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.694 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.955 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:48.942 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.203 00:19:49.203 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.203 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.203 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.465 { 00:19:49.465 "cntlid": 129, 00:19:49.465 "qid": 0, 00:19:49.465 "state": "enabled", 00:19:49.465 "thread": "nvmf_tgt_poll_group_000", 00:19:49.465 "listen_address": { 00:19:49.465 "trtype": "TCP", 00:19:49.465 "adrfam": "IPv4", 00:19:49.465 "traddr": "10.0.0.2", 00:19:49.465 "trsvcid": "4420" 00:19:49.465 }, 00:19:49.465 "peer_address": { 00:19:49.465 "trtype": "TCP", 00:19:49.465 "adrfam": "IPv4", 00:19:49.465 "traddr": "10.0.0.1", 00:19:49.465 "trsvcid": "49012" 00:19:49.465 }, 00:19:49.465 "auth": { 00:19:49.465 "state": "completed", 00:19:49.465 "digest": "sha512", 00:19:49.465 "dhgroup": "ffdhe6144" 00:19:49.465 } 00:19:49.465 } 00:19:49.465 ]' 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.465 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.726 
10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.670 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:50.932 00:19:50.932 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:50.932 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:50.932 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.193 { 00:19:51.193 "cntlid": 131, 00:19:51.193 "qid": 0, 00:19:51.193 "state": "enabled", 00:19:51.193 "thread": "nvmf_tgt_poll_group_000", 00:19:51.193 "listen_address": { 00:19:51.193 "trtype": "TCP", 00:19:51.193 "adrfam": "IPv4", 00:19:51.193 "traddr": "10.0.0.2", 00:19:51.193 "trsvcid": "4420" 00:19:51.193 }, 00:19:51.193 "peer_address": { 00:19:51.193 "trtype": "TCP", 00:19:51.193 "adrfam": "IPv4", 00:19:51.193 "traddr": "10.0.0.1", 00:19:51.193 "trsvcid": "49046" 00:19:51.193 }, 00:19:51.193 "auth": { 00:19:51.193 "state": "completed", 00:19:51.193 "digest": "sha512", 00:19:51.193 "dhgroup": "ffdhe6144" 00:19:51.193 } 00:19:51.193 } 00:19:51.193 ]' 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.193 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.453 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.453 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.396 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:52.657 
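Each attach in the log is followed by the same verification: the controller name is read back over the host socket, and the target-side qpair is queried to confirm the negotiated digest, DH group, and completed auth state. A sketch of that check, assuming jq is on PATH and reusing the socket, NQNs, and jq expressions from the log:

  # The attached controller should show up as nvme0 on the host.
  name=$(./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$name" == "nvme0" ]]

  # Inspect the target-side qpair and check the negotiated auth parameters.
  qpairs=$(./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

  # Tear down before the next key index.
  ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0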
00:19:52.657 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.657 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.657 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.918 { 00:19:52.918 "cntlid": 133, 00:19:52.918 "qid": 0, 00:19:52.918 "state": "enabled", 00:19:52.918 "thread": "nvmf_tgt_poll_group_000", 00:19:52.918 "listen_address": { 00:19:52.918 "trtype": "TCP", 00:19:52.918 "adrfam": "IPv4", 00:19:52.918 "traddr": "10.0.0.2", 00:19:52.918 "trsvcid": "4420" 00:19:52.918 }, 00:19:52.918 "peer_address": { 00:19:52.918 "trtype": "TCP", 00:19:52.918 "adrfam": "IPv4", 00:19:52.918 "traddr": "10.0.0.1", 00:19:52.918 "trsvcid": "49068" 00:19:52.918 }, 00:19:52.918 "auth": { 00:19:52.918 "state": "completed", 00:19:52.918 "digest": "sha512", 00:19:52.918 "dhgroup": "ffdhe6144" 00:19:52.918 } 00:19:52.918 } 00:19:52.918 ]' 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.918 10:08:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.918 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:52.918 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.179 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.179 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.179 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.179 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:19:54.119 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.119 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.119 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.120 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:54.691 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r 
'.[].name' 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.691 { 00:19:54.691 "cntlid": 135, 00:19:54.691 "qid": 0, 00:19:54.691 "state": "enabled", 00:19:54.691 "thread": "nvmf_tgt_poll_group_000", 00:19:54.691 "listen_address": { 00:19:54.691 "trtype": "TCP", 00:19:54.691 "adrfam": "IPv4", 00:19:54.691 "traddr": "10.0.0.2", 00:19:54.691 "trsvcid": "4420" 00:19:54.691 }, 00:19:54.691 "peer_address": { 00:19:54.691 "trtype": "TCP", 00:19:54.691 "adrfam": "IPv4", 00:19:54.691 "traddr": "10.0.0.1", 00:19:54.691 "trsvcid": "49082" 00:19:54.691 }, 00:19:54.691 "auth": { 00:19:54.691 "state": "completed", 00:19:54.691 "digest": "sha512", 00:19:54.691 "dhgroup": "ffdhe6144" 00:19:54.691 } 00:19:54.691 } 00:19:54.691 ]' 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:54.691 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.952 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.952 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.952 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.952 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.896 
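After the SPDK-host check, each round also connects the kernel NVMe/TCP initiator with nvme-cli, passing the plaintext DHHC-1 secrets instead of SPDK key names, then disconnects and removes the host entry so the next key can be installed. A sketch of that part, with HOST_SECRET and CTRL_SECRET standing for the DHHC-1:NN:...: strings visible in the log:

  # Kernel-initiator side of the same authentication.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"

  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Drop the host entry before the next key/dhgroup combination.
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be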
10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.896 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.469 00:19:56.469 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.469 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.469 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.730 
10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.730 { 00:19:56.730 "cntlid": 137, 00:19:56.730 "qid": 0, 00:19:56.730 "state": "enabled", 00:19:56.730 "thread": "nvmf_tgt_poll_group_000", 00:19:56.730 "listen_address": { 00:19:56.730 "trtype": "TCP", 00:19:56.730 "adrfam": "IPv4", 00:19:56.730 "traddr": "10.0.0.2", 00:19:56.730 "trsvcid": "4420" 00:19:56.730 }, 00:19:56.730 "peer_address": { 00:19:56.730 "trtype": "TCP", 00:19:56.730 "adrfam": "IPv4", 00:19:56.730 "traddr": "10.0.0.1", 00:19:56.730 "trsvcid": "35112" 00:19:56.730 }, 00:19:56.730 "auth": { 00:19:56.730 "state": "completed", 00:19:56.730 "digest": "sha512", 00:19:56.730 "dhgroup": "ffdhe8192" 00:19:56.730 } 00:19:56.730 } 00:19:56.730 ]' 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.730 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.731 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:56.731 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.731 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.731 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.731 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.992 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.564 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.564 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.825 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.826 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.396 00:19:58.396 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.396 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.396 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.656 { 00:19:58.656 "cntlid": 139, 00:19:58.656 "qid": 0, 00:19:58.656 "state": "enabled", 00:19:58.656 "thread": "nvmf_tgt_poll_group_000", 00:19:58.656 "listen_address": { 00:19:58.656 "trtype": "TCP", 00:19:58.656 "adrfam": "IPv4", 00:19:58.656 "traddr": "10.0.0.2", 00:19:58.656 "trsvcid": "4420" 00:19:58.656 }, 00:19:58.656 "peer_address": { 00:19:58.656 "trtype": "TCP", 00:19:58.656 "adrfam": "IPv4", 00:19:58.656 "traddr": "10.0.0.1", 00:19:58.656 "trsvcid": "35134" 00:19:58.656 }, 00:19:58.656 "auth": { 00:19:58.656 "state": "completed", 00:19:58.656 "digest": "sha512", 00:19:58.656 "dhgroup": "ffdhe8192" 00:19:58.656 } 00:19:58.656 } 00:19:58.656 ]' 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.656 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.917 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YjVkODgwNWE4NWI5NDMyZjk4ZTUwZDc2NGE5MGFhMmF37J8s: --dhchap-ctrl-secret DHHC-1:02:M2YyZTg3ZTJiODAyYjlmMTRmNDc2YTlkMzM0MDQzYzliYjc1ZDU5ZWRhM2NjMTI4n7Z1zA==: 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:59.859 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.429 00:20:00.429 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.429 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.429 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.690 { 00:20:00.690 "cntlid": 141, 00:20:00.690 "qid": 0, 00:20:00.690 "state": "enabled", 00:20:00.690 "thread": "nvmf_tgt_poll_group_000", 00:20:00.690 "listen_address": 
{ 00:20:00.690 "trtype": "TCP", 00:20:00.690 "adrfam": "IPv4", 00:20:00.690 "traddr": "10.0.0.2", 00:20:00.690 "trsvcid": "4420" 00:20:00.690 }, 00:20:00.690 "peer_address": { 00:20:00.690 "trtype": "TCP", 00:20:00.690 "adrfam": "IPv4", 00:20:00.690 "traddr": "10.0.0.1", 00:20:00.690 "trsvcid": "35168" 00:20:00.690 }, 00:20:00.690 "auth": { 00:20:00.690 "state": "completed", 00:20:00.690 "digest": "sha512", 00:20:00.690 "dhgroup": "ffdhe8192" 00:20:00.690 } 00:20:00.690 } 00:20:00.690 ]' 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.690 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.951 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:Mjg4YmRiMWI1MGU0YmNjOWY5MGJhODMyODhmM2NiNzVhMDFjODY4MGY1NWQ0Zjk5ukGgZA==: --dhchap-ctrl-secret DHHC-1:01:MDBlODI1OTNmNjI0NTdkMDE5YTEyYTZhMzgyNjI1YzKJOULI: 00:20:01.521 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.521 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.521 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.521 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.782 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.351 00:20:02.351 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.351 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.351 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.611 { 00:20:02.611 "cntlid": 143, 00:20:02.611 "qid": 0, 00:20:02.611 "state": "enabled", 00:20:02.611 "thread": "nvmf_tgt_poll_group_000", 00:20:02.611 "listen_address": { 00:20:02.611 "trtype": "TCP", 00:20:02.611 "adrfam": "IPv4", 00:20:02.611 "traddr": "10.0.0.2", 00:20:02.611 "trsvcid": "4420" 00:20:02.611 }, 00:20:02.611 "peer_address": { 00:20:02.611 "trtype": "TCP", 00:20:02.611 "adrfam": "IPv4", 00:20:02.611 "traddr": "10.0.0.1", 00:20:02.611 "trsvcid": "35196" 00:20:02.611 }, 00:20:02.611 "auth": { 00:20:02.611 "state": "completed", 00:20:02.611 "digest": "sha512", 00:20:02.611 "dhgroup": 
"ffdhe8192" 00:20:02.611 } 00:20:02.611 } 00:20:02.611 ]' 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.611 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.612 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.612 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.872 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:20:03.443 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:03.704 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.274 00:20:04.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.534 { 00:20:04.534 "cntlid": 145, 00:20:04.534 "qid": 0, 00:20:04.534 "state": "enabled", 00:20:04.534 "thread": "nvmf_tgt_poll_group_000", 00:20:04.534 "listen_address": { 00:20:04.534 "trtype": "TCP", 00:20:04.534 "adrfam": "IPv4", 00:20:04.534 "traddr": "10.0.0.2", 00:20:04.534 "trsvcid": "4420" 00:20:04.534 }, 00:20:04.534 "peer_address": { 00:20:04.534 "trtype": "TCP", 00:20:04.534 "adrfam": "IPv4", 00:20:04.534 "traddr": "10.0.0.1", 00:20:04.534 "trsvcid": "35224" 00:20:04.534 }, 00:20:04.534 "auth": { 00:20:04.534 
"state": "completed", 00:20:04.534 "digest": "sha512", 00:20:04.534 "dhgroup": "ffdhe8192" 00:20:04.534 } 00:20:04.534 } 00:20:04.534 ]' 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.534 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.795 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:MzdjMWZjOGM4NWZiYTk5NTA4ZTA2NGMyYzgwNDA5ODI2OGIwZDNkMTg1NGMwYzMz4F5NgA==: --dhchap-ctrl-secret DHHC-1:03:ZmY3YjY2MDA4YWYxMGYxZDgxYzczODFmNTNlYmIwNjM5YTQ5MjgyMDBiZWVjNzgzMGJmMTBiNTRiODkwNzExYas1N+Y=: 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:05.737 10:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.737 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:05.998 request: 00:20:05.998 { 00:20:05.998 "name": "nvme0", 00:20:05.998 "trtype": "tcp", 00:20:05.998 "traddr": "10.0.0.2", 00:20:05.998 "adrfam": "ipv4", 00:20:05.998 "trsvcid": "4420", 00:20:05.998 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:05.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:05.998 "prchk_reftag": false, 00:20:05.998 "prchk_guard": false, 00:20:05.998 "hdgst": false, 00:20:05.998 "ddgst": false, 00:20:05.998 "dhchap_key": "key2", 00:20:05.998 "method": "bdev_nvme_attach_controller", 00:20:05.998 "req_id": 1 00:20:05.998 } 00:20:05.998 Got JSON-RPC error response 00:20:05.998 response: 00:20:05.998 { 00:20:05.998 "code": -5, 00:20:05.998 "message": "Input/output error" 00:20:05.998 } 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:05.998 
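The target/auth.sh@117-118 block above is the first negative case: the host is registered with key1 only, the attach is attempted with key2, and the NOT wrapper from common/autotest_common.sh asserts that the RPC fails, which it does with the JSON-RPC code -5 Input/output error shown. A sketch of that pattern, using plain '!' in place of the NOT helper:

  # Target only knows key1 for this host NQN ...
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1

  # ... so offering key2 from the host must fail authentication.
  ! ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2

The @125 and @132 cases that follow repeat the same shape with a mismatched controller key (key1/ckey2) and then with key1/ckey1 against a host entry that was added without a controller key.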
10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:05.998 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:06.613 request: 00:20:06.613 { 00:20:06.613 "name": "nvme0", 00:20:06.613 "trtype": "tcp", 00:20:06.613 "traddr": "10.0.0.2", 00:20:06.613 "adrfam": "ipv4", 00:20:06.613 "trsvcid": "4420", 00:20:06.613 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.613 "prchk_reftag": false, 00:20:06.613 "prchk_guard": false, 00:20:06.613 "hdgst": false, 00:20:06.613 "ddgst": false, 00:20:06.613 "dhchap_key": "key1", 00:20:06.613 "dhchap_ctrlr_key": "ckey2", 00:20:06.613 "method": "bdev_nvme_attach_controller", 00:20:06.613 "req_id": 1 00:20:06.613 } 00:20:06.613 Got JSON-RPC error response 00:20:06.613 response: 00:20:06.613 { 00:20:06.613 "code": -5, 00:20:06.613 "message": "Input/output error" 00:20:06.613 } 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.613 10:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.613 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:06.874 request: 00:20:06.874 { 00:20:06.874 "name": "nvme0", 00:20:06.874 "trtype": "tcp", 00:20:06.874 "traddr": "10.0.0.2", 00:20:06.874 "adrfam": "ipv4", 00:20:06.874 "trsvcid": "4420", 00:20:06.874 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:06.874 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:06.874 "prchk_reftag": false, 00:20:06.874 "prchk_guard": false, 00:20:06.874 "hdgst": false, 00:20:06.874 "ddgst": false, 00:20:06.874 "dhchap_key": "key1", 00:20:06.874 "dhchap_ctrlr_key": "ckey1", 00:20:06.874 "method": "bdev_nvme_attach_controller", 00:20:06.874 "req_id": 1 00:20:06.874 } 00:20:06.874 Got JSON-RPC error response 00:20:06.874 response: 00:20:06.874 { 00:20:06.874 "code": -5, 00:20:06.874 "message": "Input/output error" 00:20:06.874 } 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1284239 ']' 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1284239' 00:20:07.135 killing process with pid 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1284239 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=1310741 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1310741 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1310741 ']' 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.135 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1310741 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1310741 ']' 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
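The failed attach attempts traced above are the intended outcome: the host is driven with DH-CHAP keys that do not match what the target has registered for this host NQN, so each bdev_nvme_attach_controller call returns -5 (Input/output error), and the NOT helper from autotest_common.sh inverts that exit status so the test case passes only when authentication is refused. A minimal sketch of such a helper, simplified from the behaviour visible in the traces (the real implementation also validates its argument and special-cases exit codes above 128 from signals), could look like:

  # Run a command and succeed only if the command itself fails.
  NOT() {
      local es=0
      "$@" || es=$?      # capture the wrapped command's exit status
      (( es != 0 ))      # exit 0 (pass) only when the command failed
  }

  # Usage, mirroring the traces above; $hostnqn stands in for the host NQN used by this job,
  # and the rpc.py path is shortened.
  NOT scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey2

With the key-mismatch cases covered, the original target (pid 1284239) is killed and a fresh nvmf_tgt is started with -L nvmf_auth so the remaining cases run with authentication debug logging enabled.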
00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:08.078 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.339 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.911 00:20:08.911 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.911 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.911 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.172 { 00:20:09.172 "cntlid": 1, 00:20:09.172 "qid": 0, 00:20:09.172 "state": "enabled", 00:20:09.172 "thread": "nvmf_tgt_poll_group_000", 00:20:09.172 "listen_address": { 00:20:09.172 "trtype": "TCP", 00:20:09.172 "adrfam": "IPv4", 00:20:09.172 "traddr": "10.0.0.2", 00:20:09.172 "trsvcid": "4420" 00:20:09.172 }, 00:20:09.172 "peer_address": { 00:20:09.172 "trtype": "TCP", 00:20:09.172 "adrfam": "IPv4", 00:20:09.172 "traddr": "10.0.0.1", 00:20:09.172 "trsvcid": "42288" 00:20:09.172 }, 00:20:09.172 "auth": { 00:20:09.172 "state": "completed", 00:20:09.172 "digest": "sha512", 00:20:09.172 "dhgroup": "ffdhe8192" 00:20:09.172 } 00:20:09.172 } 00:20:09.172 ]' 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.172 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.434 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:Y2JkOGUzMTAzODU1NDQ5YmFjNmMyYzYzZDdhMTg1NDZiYzRkNTUyMzQ1OTY3ZTFkYmIwNDA1MTlkZTgzNWZmZF+5mX8=: 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:10.008 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.008 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.008 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.008 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:10.008 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.269 request: 00:20:10.269 { 00:20:10.269 "name": "nvme0", 00:20:10.269 "trtype": "tcp", 00:20:10.269 "traddr": "10.0.0.2", 00:20:10.269 "adrfam": "ipv4", 00:20:10.269 "trsvcid": "4420", 00:20:10.269 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.269 "prchk_reftag": false, 00:20:10.269 "prchk_guard": false, 00:20:10.269 "hdgst": false, 00:20:10.269 "ddgst": false, 00:20:10.269 "dhchap_key": "key3", 00:20:10.269 "method": "bdev_nvme_attach_controller", 00:20:10.269 "req_id": 1 00:20:10.269 } 00:20:10.269 Got JSON-RPC error response 00:20:10.269 response: 00:20:10.269 { 00:20:10.269 "code": -5, 00:20:10.269 "message": "Input/output error" 00:20:10.269 } 00:20:10.269 10:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.269 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.531 request: 00:20:10.531 { 00:20:10.531 "name": "nvme0", 00:20:10.531 "trtype": "tcp", 00:20:10.531 "traddr": "10.0.0.2", 00:20:10.531 "adrfam": "ipv4", 00:20:10.531 "trsvcid": "4420", 00:20:10.531 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:10.531 "prchk_reftag": false, 00:20:10.531 "prchk_guard": false, 00:20:10.531 "hdgst": false, 00:20:10.531 "ddgst": false, 00:20:10.531 "dhchap_key": "key3", 00:20:10.531 
"method": "bdev_nvme_attach_controller", 00:20:10.531 "req_id": 1 00:20:10.531 } 00:20:10.531 Got JSON-RPC error response 00:20:10.531 response: 00:20:10.531 { 00:20:10.531 "code": -5, 00:20:10.531 "message": "Input/output error" 00:20:10.531 } 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.531 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.793 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.055 request: 00:20:11.055 { 00:20:11.055 "name": "nvme0", 00:20:11.055 "trtype": "tcp", 00:20:11.055 "traddr": "10.0.0.2", 00:20:11.055 "adrfam": "ipv4", 00:20:11.055 "trsvcid": "4420", 00:20:11.055 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:11.055 "prchk_reftag": false, 00:20:11.055 "prchk_guard": false, 00:20:11.055 "hdgst": false, 00:20:11.055 "ddgst": false, 00:20:11.055 "dhchap_key": "key0", 00:20:11.055 "dhchap_ctrlr_key": "key1", 00:20:11.055 "method": "bdev_nvme_attach_controller", 00:20:11.055 "req_id": 1 00:20:11.055 } 00:20:11.055 Got JSON-RPC error response 00:20:11.055 response: 00:20:11.055 { 00:20:11.055 "code": -5, 00:20:11.055 "message": "Input/output error" 00:20:11.055 } 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:11.055 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:11.316 00:20:11.316 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:11.316 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
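After the negative cases, the attach with --dhchap-key key0 alone is expected to succeed, and the check traced around this point confirms it from the host side: the controller list is fetched over the host RPC socket, the name is compared against nvme0, and the controller is detached again before cleanup. Condensed into standalone commands (rpc.py path shortened, socket as used by this job), that verification is roughly:

  # List host-side NVMe-oF controllers and check that the attach produced nvme0.
  name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]] || echo "attach did not produce the expected controller" >&2

  # Tear the controller down again once verified.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0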
00:20:11.316 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.316 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.316 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.317 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1284365 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1284365 ']' 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1284365 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1284365 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1284365' 00:20:11.578 killing process with pid 1284365 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1284365 00:20:11.578 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1284365 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.840 rmmod nvme_tcp 00:20:11.840 rmmod nvme_fabrics 00:20:11.840 rmmod nvme_keyring 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 1310741 ']' 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1310741 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1310741 ']' 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1310741 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1310741 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1310741' 00:20:11.840 killing process with pid 1310741 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1310741 00:20:11.840 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1310741 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.102 10:08:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.018 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:14.018 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.F0J /tmp/spdk.key-sha256.C8q /tmp/spdk.key-sha384.Kka /tmp/spdk.key-sha512.5lj /tmp/spdk.key-sha512.L4c /tmp/spdk.key-sha384.gVt /tmp/spdk.key-sha256.FZs '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:14.018 00:20:14.018 real 2m24.204s 00:20:14.018 user 5m20.528s 00:20:14.018 sys 0m21.307s 00:20:14.018 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:14.018 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.018 ************************************ 00:20:14.018 END TEST nvmf_auth_target 00:20:14.018 ************************************ 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:14.281 10:08:53 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:14.281 ************************************ 00:20:14.281 START TEST nvmf_bdevio_no_huge 00:20:14.281 ************************************ 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:14.281 * Looking for test storage... 00:20:14.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:14.281 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:14.282 10:08:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:14.282 10:08:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.876 10:08:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:20.876 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:20.876 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.876 10:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:20.876 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:20.876 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:20.876 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.876 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.137 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.398 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:21.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:20:21.398 00:20:21.398 --- 10.0.0.2 ping statistics --- 00:20:21.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.398 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:20:21.398 00:20:21.398 --- 10.0.0.1 ping statistics --- 00:20:21.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.398 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1315819 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1315819 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1315819 ']' 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
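
The trace above is nvmf_tcp_init building the point-to-point test topology: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and one ping in each direction confirms the link before the target is started inside the namespace. Condensed into a stand-alone sketch (interface, namespace names and addresses are the ones from this run; the real logic lives in nvmf/common.sh):

    #!/usr/bin/env bash
    # Minimal sketch of the nvmf_tcp_init steps traced above.
    NS=cvl_0_0_ns_spdk      # namespace that holds the target-side port
    TGT_IF=cvl_0_0          # target interface (moved into the namespace)
    INI_IF=cvl_0_1          # initiator interface (stays in the root namespace)

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address

    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port and check connectivity in both directions.
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

With the namespace in place, the target is launched as ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 (binary path shortened): the no-huge variant runs the whole target on 1024 MB of regular, non-hugepage memory, and the 0x78 core mask is why reactors come up on cores 3-6 a few lines further down.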
00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:21.398 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:21.398 [2024-07-25 10:09:00.447926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:21.398 [2024-07-25 10:09:00.447996] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:21.659 [2024-07-25 10:09:00.539606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.659 [2024-07-25 10:09:00.648355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.659 [2024-07-25 10:09:00.648411] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.659 [2024-07-25 10:09:00.648420] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.659 [2024-07-25 10:09:00.648431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.659 [2024-07-25 10:09:00.648438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.659 [2024-07-25 10:09:00.648606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:21.659 [2024-07-25 10:09:00.648764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:21.659 [2024-07-25 10:09:00.648923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.659 [2024-07-25 10:09:00.648924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 [2024-07-25 10:09:01.307063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.232 10:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 Malloc0 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.232 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:22.232 [2024-07-25 10:09:01.360658] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:22.493 { 00:20:22.493 "params": { 00:20:22.493 "name": "Nvme$subsystem", 00:20:22.493 "trtype": "$TEST_TRANSPORT", 00:20:22.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:22.493 "adrfam": "ipv4", 00:20:22.493 "trsvcid": "$NVMF_PORT", 00:20:22.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:22.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:22.493 "hdgst": ${hdgst:-false}, 00:20:22.493 "ddgst": ${ddgst:-false} 00:20:22.493 }, 00:20:22.493 "method": "bdev_nvme_attach_controller" 00:20:22.493 } 00:20:22.493 EOF 00:20:22.493 )") 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
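
Stripped of the xtrace noise, the target-side provisioning that bdevio.sh performs before launching the I/O test is a short RPC sequence (path shortened to rpc.py here; in the log each call runs through rpc_cmd against the target inside the namespace, and the flags are copied from the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed next is what gen_nvmf_target_json pipes to bdevio over /dev/fd/62: a single bdev_nvme_attach_controller entry pointing at nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, which is how the initiator-side bdevio process learns about the target without a config file.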
00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:22.493 10:09:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:22.493 "params": { 00:20:22.493 "name": "Nvme1", 00:20:22.493 "trtype": "tcp", 00:20:22.493 "traddr": "10.0.0.2", 00:20:22.493 "adrfam": "ipv4", 00:20:22.493 "trsvcid": "4420", 00:20:22.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.493 "hdgst": false, 00:20:22.493 "ddgst": false 00:20:22.493 }, 00:20:22.493 "method": "bdev_nvme_attach_controller" 00:20:22.493 }' 00:20:22.493 [2024-07-25 10:09:01.417493] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:22.493 [2024-07-25 10:09:01.417561] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1316054 ] 00:20:22.493 [2024-07-25 10:09:01.486146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:22.493 [2024-07-25 10:09:01.582883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.493 [2024-07-25 10:09:01.583001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.493 [2024-07-25 10:09:01.583005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.754 I/O targets: 00:20:22.754 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:22.754 00:20:22.754 00:20:22.754 CUnit - A unit testing framework for C - Version 2.1-3 00:20:22.754 http://cunit.sourceforge.net/ 00:20:22.754 00:20:22.754 00:20:22.754 Suite: bdevio tests on: Nvme1n1 00:20:22.754 Test: blockdev write read block ...passed 00:20:22.754 Test: blockdev write zeroes read block ...passed 00:20:22.754 Test: blockdev write zeroes read no split ...passed 00:20:22.754 Test: blockdev write zeroes read split ...passed 00:20:23.014 Test: blockdev write zeroes read split partial ...passed 00:20:23.015 Test: blockdev reset ...[2024-07-25 10:09:01.932682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:23.015 [2024-07-25 10:09:01.932743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xee7c10 (9): Bad file descriptor 00:20:23.015 [2024-07-25 10:09:01.950135] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:23.015 passed 00:20:23.015 Test: blockdev write read 8 blocks ...passed 00:20:23.015 Test: blockdev write read size > 128k ...passed 00:20:23.015 Test: blockdev write read invalid size ...passed 00:20:23.015 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:23.015 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:23.015 Test: blockdev write read max offset ...passed 00:20:23.015 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:23.275 Test: blockdev writev readv 8 blocks ...passed 00:20:23.275 Test: blockdev writev readv 30 x 1block ...passed 00:20:23.275 Test: blockdev writev readv block ...passed 00:20:23.275 Test: blockdev writev readv size > 128k ...passed 00:20:23.275 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:23.275 Test: blockdev comparev and writev ...[2024-07-25 10:09:02.224932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.224961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.224972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.224978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.225634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.225643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.225652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.225658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.226273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.226281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.226290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.226296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.226914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.226922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.226932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:23.275 [2024-07-25 10:09:02.226937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:23.275 passed 00:20:23.275 Test: blockdev nvme passthru rw ...passed 00:20:23.275 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:09:02.312323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.275 [2024-07-25 10:09:02.312333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.312796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.275 [2024-07-25 10:09:02.312803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.313291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.275 [2024-07-25 10:09:02.313299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:23.275 [2024-07-25 10:09:02.313789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.275 [2024-07-25 10:09:02.313796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:23.275 passed 00:20:23.275 Test: blockdev nvme admin passthru ...passed 00:20:23.275 Test: blockdev copy ...passed 00:20:23.275 00:20:23.275 Run Summary: Type Total Ran Passed Failed Inactive 00:20:23.275 suites 1 1 n/a 0 0 00:20:23.275 tests 23 23 23 0 0 00:20:23.275 asserts 152 152 152 0 n/a 00:20:23.275 00:20:23.275 Elapsed time = 1.300 seconds 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.535 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.535 rmmod nvme_tcp 00:20:23.796 rmmod nvme_fabrics 00:20:23.796 rmmod nvme_keyring 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1315819 ']' 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1315819 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1315819 ']' 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1315819 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1315819 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1315819' 00:20:23.796 killing process with pid 1315819 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1315819 00:20:23.796 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1315819 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.057 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:26.673 00:20:26.673 real 0m12.017s 00:20:26.673 user 0m13.675s 00:20:26.673 sys 0m6.266s 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 ************************************ 00:20:26.673 END TEST nvmf_bdevio_no_huge 00:20:26.673 ************************************ 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.673 ************************************ 00:20:26.673 START TEST nvmf_tls 00:20:26.673 ************************************ 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:26.673 * Looking for test storage... 00:20:26.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.673 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
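
tls.sh starts from the same nvmf/common.sh scaffolding as the previous test; the per-run identity it picks up here is the host NQN/ID pair from nvme-cli, which later feeds the --hostnqn/--hostid connect arguments. A minimal stand-alone equivalent (the uuid-stripping shown is one way to derive the host ID, not necessarily the exact expression common.sh uses):

    # Requires nvme-cli; prints e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # the uuid part of the NQN
    echo "$NVME_HOSTNQN"
    echo "$NVME_HOSTID"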
00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.674 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:33.272 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:33.272 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:33.272 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:33.272 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.272 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.273 10:09:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.273 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.534 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:20:33.534 00:20:33.534 --- 10.0.0.2 ping statistics --- 00:20:33.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.534 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:33.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:20:33.535 00:20:33.535 --- 10.0.0.1 ping statistics --- 00:20:33.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.535 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1321049 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1321049 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1321049 ']' 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:33.535 10:09:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.796 [2024-07-25 10:09:12.688711] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
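
Unlike the previous test, this target is started with --wait-for-rpc, so the script can select and tune the ssl socket implementation before the framework finishes initializing. The probing that follows reduces to this RPC sequence (rpc.py path shortened; the values are the ones exercised below):

    rpc.py sock_set_default_impl -i ssl                        # make ssl the default socket implementation
    rpc.py sock_impl_get_options -i ssl | jq -r .tls_version   # reads back 0 until a version is pinned
    rpc.py sock_impl_set_options -i ssl --tls-version 13       # TLS 1.3, the value used for the real test
    rpc.py sock_impl_set_options -i ssl --tls-version 7        # set/get round-trip check only
    rpc.py sock_impl_set_options -i ssl --enable-ktls          # kTLS toggled on ...
    rpc.py sock_impl_set_options -i ssl --disable-ktls         # ... and back off
    rpc.py framework_start_init                                # only now does the target finish booting

Right after these checks, format_interchange_psk wraps the two raw hex keys (00112233445566778899aabbccddeeff and ffeeddccbbaa99887766554433221100) into the NVMe TLS PSK interchange form seen in the log (NVMeTLSkey-1:01:...:); as far as the format_key helper goes, this appears to be a base64 encoding of the key material with a short checksum appended, with the digest argument selecting the "01" variant. The keys are written to mktemp files, chmod'ed 0600, and those paths are what nvmf_subsystem_add_host --psk and the perf/bdevperf initiators' --psk-path later consume.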
00:20:33.796 [2024-07-25 10:09:12.688777] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.796 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.796 [2024-07-25 10:09:12.776824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.797 [2024-07-25 10:09:12.867858] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.797 [2024-07-25 10:09:12.867919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.797 [2024-07-25 10:09:12.867928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.797 [2024-07-25 10:09:12.867934] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.797 [2024-07-25 10:09:12.867940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:33.797 [2024-07-25 10:09:12.867966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.369 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:34.369 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:34.369 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:34.369 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.369 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.629 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.630 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:34.630 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:34.630 true 00:20:34.630 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:34.630 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:34.890 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:34.890 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:34.890 10:09:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:35.150 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.150 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:35.150 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:35.150 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:35.150 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:35.410 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.410 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:35.671 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:35.671 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:35.671 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.671 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:35.672 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:35.672 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:35.672 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:35.933 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:35.933 10:09:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:35.933 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:35.933 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:35.933 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:36.194 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:36.194 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.TmR28SnMSM 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.fq29gR4FDo 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.TmR28SnMSM 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.fq29gR4FDo 00:20:36.456 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:36.717 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:36.977 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.TmR28SnMSM 00:20:36.977 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.TmR28SnMSM 00:20:36.977 10:09:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.977 [2024-07-25 10:09:16.078711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.977 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:37.238 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:37.498 [2024-07-25 10:09:16.415551] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:37.498 [2024-07-25 10:09:16.415850] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.498 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:37.498 malloc0 00:20:37.498 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:37.758 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TmR28SnMSM 00:20:38.019 [2024-07-25 10:09:16.902582] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:38.019 10:09:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.TmR28SnMSM 00:20:38.019 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.011 Initializing NVMe Controllers 00:20:48.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.011 Initialization complete. Launching workers. 00:20:48.011 ======================================================== 00:20:48.011 Latency(us) 00:20:48.011 Device Information : IOPS MiB/s Average min max 00:20:48.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18944.73 74.00 3378.32 1053.41 5361.18 00:20:48.011 ======================================================== 00:20:48.011 Total : 18944.73 74.00 3378.32 1053.41 5361.18 00:20:48.011 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.TmR28SnMSM 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TmR28SnMSM' 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1323790 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1323790 /var/tmp/bdevperf.sock 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1323790 ']' 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.011 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.011 [2024-07-25 10:09:27.088228] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:48.011 [2024-07-25 10:09:27.088284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1323790 ] 00:20:48.011 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.011 [2024-07-25 10:09:27.136975] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.270 [2024-07-25 10:09:27.188933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.841 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:48.841 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:48.841 10:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TmR28SnMSM 00:20:49.103 [2024-07-25 10:09:27.977823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.103 [2024-07-25 10:09:27.977880] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.103 TLSTESTn1 00:20:49.103 10:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.103 Running I/O for 10 seconds... 
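The ten-second verify run launched above talks to a target that was configured step by step earlier in this trace. Condensed into the underlying RPC calls (paths, NQNs and the key file exactly as used in this run), the target-side setup is roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.TmR28SnMSM                      # holds the NVMeTLSkey-1:01:... string, mode 0600

$rpc sock_impl_set_options -i ssl --tls-version 13        # pin the ssl sock impl to TLS 1.3
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k    # -k: TLS listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"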
00:20:59.128 00:20:59.128 Latency(us) 00:20:59.128 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.128 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.129 Verification LBA range: start 0x0 length 0x2000 00:20:59.129 TLSTESTn1 : 10.07 2100.09 8.20 0.00 0.00 60742.15 6116.69 120586.24 00:20:59.129 =================================================================================================================== 00:20:59.129 Total : 2100.09 8.20 0.00 0.00 60742.15 6116.69 120586.24 00:20:59.129 0 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1323790 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1323790 ']' 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1323790 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1323790 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1323790' 00:20:59.389 killing process with pid 1323790 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1323790 00:20:59.389 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.389 00:20:59.389 Latency(us) 00:20:59.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.389 =================================================================================================================== 00:20:59.389 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.389 [2024-07-25 10:09:38.334784] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1323790 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fq29gR4FDo 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fq29gR4FDo 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fq29gR4FDo 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fq29gR4FDo' 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326095 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326095 /var/tmp/bdevperf.sock 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1326095 ']' 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:59.389 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.389 [2024-07-25 10:09:38.500664] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
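This bdevperf instance serves the first expected-failure case: the attach uses /tmp/tmp.fq29gR4FDo, the second key generated earlier, which was never registered on the target with nvmf_subsystem_add_host. The target does hold a PSK for this host/subsystem pair, but not this one, so the TLS handshake cannot complete and bdev_nvme_attach_controller is expected to fail. Stripped of the test harness, the check amounts to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
if $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fq29gR4FDo; then
    echo "unexpected: attach with an unregistered PSK succeeded" >&2
    exit 1
fi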
00:20:59.389 [2024-07-25 10:09:38.500721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326095 ] 00:20:59.713 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.713 [2024-07-25 10:09:38.549825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.713 [2024-07-25 10:09:38.601621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.285 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.285 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:00.285 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fq29gR4FDo 00:21:00.285 [2024-07-25 10:09:39.402388] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.285 [2024-07-25 10:09:39.402440] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.285 [2024-07-25 10:09:39.410083] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:00.285 [2024-07-25 10:09:39.410430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abec0 (107): Transport endpoint is not connected 00:21:00.285 [2024-07-25 10:09:39.411426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8abec0 (9): Bad file descriptor 00:21:00.285 [2024-07-25 10:09:39.412428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:00.285 [2024-07-25 10:09:39.412437] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:00.285 [2024-07-25 10:09:39.412444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:00.285 request: 00:21:00.285 { 00:21:00.285 "name": "TLSTEST", 00:21:00.285 "trtype": "tcp", 00:21:00.285 "traddr": "10.0.0.2", 00:21:00.285 "adrfam": "ipv4", 00:21:00.285 "trsvcid": "4420", 00:21:00.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:00.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:00.285 "prchk_reftag": false, 00:21:00.285 "prchk_guard": false, 00:21:00.285 "hdgst": false, 00:21:00.285 "ddgst": false, 00:21:00.285 "psk": "/tmp/tmp.fq29gR4FDo", 00:21:00.285 "method": "bdev_nvme_attach_controller", 00:21:00.285 "req_id": 1 00:21:00.285 } 00:21:00.285 Got JSON-RPC error response 00:21:00.285 response: 00:21:00.285 { 00:21:00.285 "code": -5, 00:21:00.285 "message": "Input/output error" 00:21:00.285 } 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1326095 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1326095 ']' 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1326095 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1326095 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1326095' 00:21:00.547 killing process with pid 1326095 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1326095 00:21:00.547 Received shutdown signal, test time was about 10.000000 seconds 00:21:00.547 00:21:00.547 Latency(us) 00:21:00.547 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:00.547 =================================================================================================================== 00:21:00.547 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:00.547 [2024-07-25 10:09:39.497375] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1326095 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TmR28SnMSM 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TmR28SnMSM 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.TmR28SnMSM 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TmR28SnMSM' 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326161 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326161 /var/tmp/bdevperf.sock 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1326161 ']' 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.547 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:00.547 [2024-07-25 10:09:39.654604] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
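The next case connects as host2 while only host1 was given a PSK for cnode1. The errors a few lines below show why it fails on the target side: the listener looks up the PSK by an identity derived from the connecting host NQN and the subsystem NQN, and no registered entry matches. The identity string, rebuilt from the error text in this trace:

# Identity format as it appears in the tcp_sock_get_key / posix_sock errors below
hostnqn=nqn.2016-06.io.spdk:host2      # what bdevperf connects as (-q)
subnqn=nqn.2016-06.io.spdk:cnode1      # the subsystem being attached (-n)
echo "NVMe0R01 ${hostnqn} ${subnqn}"   # only "... host1 ..." was registered via add_host, so the lookup fails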
00:21:00.547 [2024-07-25 10:09:39.654656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326161 ] 00:21:00.547 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.809 [2024-07-25 10:09:39.704355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.809 [2024-07-25 10:09:39.754835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.379 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.379 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:01.379 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.TmR28SnMSM 00:21:01.640 [2024-07-25 10:09:40.563964] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:01.640 [2024-07-25 10:09:40.564037] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:01.640 [2024-07-25 10:09:40.572851] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.640 [2024-07-25 10:09:40.572870] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:01.640 [2024-07-25 10:09:40.572889] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:01.640 [2024-07-25 10:09:40.573352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x888ec0 (107): Transport endpoint is not connected 00:21:01.640 [2024-07-25 10:09:40.574347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x888ec0 (9): Bad file descriptor 00:21:01.640 [2024-07-25 10:09:40.575348] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:01.640 [2024-07-25 10:09:40.575356] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:01.640 [2024-07-25 10:09:40.575363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:01.640 request: 00:21:01.640 { 00:21:01.640 "name": "TLSTEST", 00:21:01.640 "trtype": "tcp", 00:21:01.640 "traddr": "10.0.0.2", 00:21:01.640 "adrfam": "ipv4", 00:21:01.640 "trsvcid": "4420", 00:21:01.640 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:01.640 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:01.640 "prchk_reftag": false, 00:21:01.640 "prchk_guard": false, 00:21:01.640 "hdgst": false, 00:21:01.640 "ddgst": false, 00:21:01.640 "psk": "/tmp/tmp.TmR28SnMSM", 00:21:01.640 "method": "bdev_nvme_attach_controller", 00:21:01.640 "req_id": 1 00:21:01.640 } 00:21:01.640 Got JSON-RPC error response 00:21:01.640 response: 00:21:01.640 { 00:21:01.641 "code": -5, 00:21:01.641 "message": "Input/output error" 00:21:01.641 } 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1326161 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1326161 ']' 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1326161 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1326161 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1326161' 00:21:01.641 killing process with pid 1326161 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1326161 00:21:01.641 Received shutdown signal, test time was about 10.000000 seconds 00:21:01.641 00:21:01.641 Latency(us) 00:21:01.641 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.641 =================================================================================================================== 00:21:01.641 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:01.641 [2024-07-25 10:09:40.659787] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1326161 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TmR28SnMSM 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TmR28SnMSM 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.TmR28SnMSM 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.TmR28SnMSM' 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326484 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326484 /var/tmp/bdevperf.sock 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1326484 ']' 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:01.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:01.641 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.902 [2024-07-25 10:09:40.818528] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:01.902 [2024-07-25 10:09:40.818581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326484 ] 00:21:01.902 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.902 [2024-07-25 10:09:40.868223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.902 [2024-07-25 10:09:40.919520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:01.902 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:01.902 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:01.902 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.TmR28SnMSM 00:21:02.163 [2024-07-25 10:09:41.123606] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.163 [2024-07-25 10:09:41.123668] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:02.163 [2024-07-25 10:09:41.134193] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.163 [2024-07-25 10:09:41.134215] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:02.163 [2024-07-25 10:09:41.134234] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.163 [2024-07-25 10:09:41.134844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacbec0 (107): Transport endpoint is not connected 00:21:02.163 [2024-07-25 10:09:41.135839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xacbec0 (9): Bad file descriptor 00:21:02.163 [2024-07-25 10:09:41.136841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:02.163 [2024-07-25 10:09:41.136848] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:02.163 [2024-07-25 10:09:41.136855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:02.163 request: 00:21:02.163 { 00:21:02.163 "name": "TLSTEST", 00:21:02.163 "trtype": "tcp", 00:21:02.163 "traddr": "10.0.0.2", 00:21:02.163 "adrfam": "ipv4", 00:21:02.163 "trsvcid": "4420", 00:21:02.163 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:02.163 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:02.163 "prchk_reftag": false, 00:21:02.163 "prchk_guard": false, 00:21:02.163 "hdgst": false, 00:21:02.163 "ddgst": false, 00:21:02.163 "psk": "/tmp/tmp.TmR28SnMSM", 00:21:02.163 "method": "bdev_nvme_attach_controller", 00:21:02.163 "req_id": 1 00:21:02.163 } 00:21:02.163 Got JSON-RPC error response 00:21:02.163 response: 00:21:02.163 { 00:21:02.163 "code": -5, 00:21:02.163 "message": "Input/output error" 00:21:02.163 } 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1326484 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1326484 ']' 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1326484 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1326484 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1326484' 00:21:02.163 killing process with pid 1326484 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1326484 00:21:02.163 Received shutdown signal, test time was about 10.000000 seconds 00:21:02.163 00:21:02.163 Latency(us) 00:21:02.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:02.163 =================================================================================================================== 00:21:02.163 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:02.163 [2024-07-25 10:09:41.223570] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:02.163 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1326484 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1326524 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1326524 /var/tmp/bdevperf.sock 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1326524 ']' 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:02.425 10:09:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.425 [2024-07-25 10:09:41.381255] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
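Each of these negative cases is wrapped in the common NOT helper, which is what produces the repeated valid_exec_arg / type -t / (( !es == 0 )) xtrace lines around them. A simplified stand-in for that wrapper (the real one in autotest_common.sh also special-cases exit codes above 128):

# Run a command that is expected to fail; succeed only if it did fail.
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# Used as in the traces above, e.g.:
#   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''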
00:21:02.425 [2024-07-25 10:09:41.381309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326524 ] 00:21:02.425 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.425 [2024-07-25 10:09:41.431481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.425 [2024-07-25 10:09:41.482986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:03.367 [2024-07-25 10:09:42.297049] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:03.367 [2024-07-25 10:09:42.298752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18134a0 (9): Bad file descriptor 00:21:03.367 [2024-07-25 10:09:42.299751] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:03.367 [2024-07-25 10:09:42.299759] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:03.367 [2024-07-25 10:09:42.299765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:03.367 request: 00:21:03.367 { 00:21:03.367 "name": "TLSTEST", 00:21:03.367 "trtype": "tcp", 00:21:03.367 "traddr": "10.0.0.2", 00:21:03.367 "adrfam": "ipv4", 00:21:03.367 "trsvcid": "4420", 00:21:03.367 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:03.367 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:03.367 "prchk_reftag": false, 00:21:03.367 "prchk_guard": false, 00:21:03.367 "hdgst": false, 00:21:03.367 "ddgst": false, 00:21:03.367 "method": "bdev_nvme_attach_controller", 00:21:03.367 "req_id": 1 00:21:03.367 } 00:21:03.367 Got JSON-RPC error response 00:21:03.367 response: 00:21:03.367 { 00:21:03.367 "code": -5, 00:21:03.367 "message": "Input/output error" 00:21:03.367 } 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1326524 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1326524 ']' 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1326524 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1326524 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:03.367 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1326524' 00:21:03.368 killing process with pid 1326524 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1326524 00:21:03.368 Received shutdown signal, test time was about 10.000000 seconds 00:21:03.368 00:21:03.368 Latency(us) 00:21:03.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:03.368 =================================================================================================================== 00:21:03.368 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1326524 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 1321049 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1321049 ']' 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1321049 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.368 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1321049 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1321049' 00:21:03.629 killing process with pid 1321049 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1321049 00:21:03.629 [2024-07-25 10:09:42.540819] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1321049 00:21:03.629 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UcYOHjkV3h 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UcYOHjkV3h 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1326850 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1326850 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1326850 ']' 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.630 10:09:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:03.630 10:09:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:03.891 [2024-07-25 10:09:42.781420] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:03.891 [2024-07-25 10:09:42.781474] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.891 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.891 [2024-07-25 10:09:42.864329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.891 [2024-07-25 10:09:42.918523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.891 [2024-07-25 10:09:42.918558] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.891 [2024-07-25 10:09:42.918563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.891 [2024-07-25 10:09:42.918568] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.891 [2024-07-25 10:09:42.918572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
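The /tmp/tmp.UcYOHjkV3h key that this new target will use was produced above by format_interchange_psk with a 48-byte secret and digest 2, giving the NVMeTLSkey-1:02:...: string. A hypothetical re-creation of what that embedded python step appears to do (assumptions: the secret text is taken verbatim, a little-endian CRC-32 of it is appended, and the result is base64-encoded between the prefix and the hash-indicator field):

key="00112233445566778899aabbccddeeff0011223344556677"
python3 - "$key" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(secret))   # assumed little-endian CRC-32 trailer
print("NVMeTLSkey-1:02:" + base64.b64encode(secret + crc).decode() + ":")
PY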
00:21:03.891 [2024-07-25 10:09:42.918589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UcYOHjkV3h 00:21:04.470 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:04.730 [2024-07-25 10:09:43.716340] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.730 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:04.991 10:09:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:04.991 [2024-07-25 10:09:44.025102] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.991 [2024-07-25 10:09:44.025283] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:04.991 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:05.252 malloc0 00:21:05.252 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:05.252 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:05.513 [2024-07-25 10:09:44.492205] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcYOHjkV3h 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UcYOHjkV3h' 00:21:05.513 10:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.513 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1327216 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1327216 /var/tmp/bdevperf.sock 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1327216 ']' 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.514 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.514 [2024-07-25 10:09:44.538568] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:05.514 [2024-07-25 10:09:44.538616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327216 ] 00:21:05.514 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.514 [2024-07-25 10:09:44.587162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.514 [2024-07-25 10:09:44.639610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.774 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:05.774 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:05.774 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:05.774 [2024-07-25 10:09:44.851083] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.774 [2024-07-25 10:09:44.851138] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:06.035 TLSTESTn1 00:21:06.035 10:09:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:06.035 Running I/O for 10 seconds... 
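This second verify run drives the same client-side pattern as the first: start bdevperf in wait-for-RPC mode, attach a TLS-enabled NVMe-oF controller through its private RPC socket, then kick off the I/O via bdevperf.py. Reduced to the commands visible in the traces:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bdevperf.sock

$spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w verify -t 10 &
# (the harness waits for the RPC socket to come up before issuing the next call)
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s $sock perform_tests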
00:21:16.039 00:21:16.039 Latency(us) 00:21:16.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.039 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:16.039 Verification LBA range: start 0x0 length 0x2000 00:21:16.039 TLSTESTn1 : 10.08 2109.24 8.24 0.00 0.00 60482.47 6089.39 109663.57 00:21:16.039 =================================================================================================================== 00:21:16.039 Total : 2109.24 8.24 0.00 0.00 60482.47 6089.39 109663.57 00:21:16.039 0 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 1327216 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1327216 ']' 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1327216 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1327216 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1327216' 00:21:16.301 killing process with pid 1327216 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1327216 00:21:16.301 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.301 00:21:16.301 Latency(us) 00:21:16.301 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.301 =================================================================================================================== 00:21:16.301 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.301 [2024-07-25 10:09:55.245077] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1327216 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UcYOHjkV3h 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcYOHjkV3h 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcYOHjkV3h 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:16.301 
10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcYOHjkV3h 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UcYOHjkV3h' 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1329367 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1329367 /var/tmp/bdevperf.sock 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1329367 ']' 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.301 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.301 [2024-07-25 10:09:55.414567] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
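The NOT wrapper appearing in the trace here is the harness's expected-failure helper: the test passes only if run_bdevperf fails, which is the point of handing it a world-readable key. A simplified sketch of the idea follows; the real helper in common/autotest_common.sh keeps the extra bookkeeping (es, valid_exec_arg) visible in the trace:

    # succeed only when the wrapped command fails (simplified; not the literal harness code)
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UcYOHjkV3h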
00:21:16.301 [2024-07-25 10:09:55.414625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329367 ] 00:21:16.562 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.562 [2024-07-25 10:09:55.464118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.562 [2024-07-25 10:09:55.516106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:17.134 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.134 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:17.134 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:17.396 [2024-07-25 10:09:56.301027] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:17.396 [2024-07-25 10:09:56.301065] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:17.396 [2024-07-25 10:09:56.301070] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UcYOHjkV3h 00:21:17.396 request: 00:21:17.396 { 00:21:17.396 "name": "TLSTEST", 00:21:17.396 "trtype": "tcp", 00:21:17.396 "traddr": "10.0.0.2", 00:21:17.396 "adrfam": "ipv4", 00:21:17.396 "trsvcid": "4420", 00:21:17.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:17.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:17.396 "prchk_reftag": false, 00:21:17.396 "prchk_guard": false, 00:21:17.396 "hdgst": false, 00:21:17.396 "ddgst": false, 00:21:17.396 "psk": "/tmp/tmp.UcYOHjkV3h", 00:21:17.396 "method": "bdev_nvme_attach_controller", 00:21:17.396 "req_id": 1 00:21:17.396 } 00:21:17.396 Got JSON-RPC error response 00:21:17.396 response: 00:21:17.396 { 00:21:17.396 "code": -1, 00:21:17.396 "message": "Operation not permitted" 00:21:17.396 } 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 1329367 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1329367 ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1329367 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1329367 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1329367' 00:21:17.396 killing process with pid 1329367 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1329367 00:21:17.396 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.396 
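The -1 / "Operation not permitted" response above is the expected outcome: SPDK rejects PSK interchange files whose permissions allow group or world access, which is exactly what the earlier chmod 0666 produced ("Incorrect permissions for PSK file"). The key is only usable again once the test tightens it back:

    chmod 0600 /tmp/tmp.UcYOHjkV3h    # owner-only access: accepted later in the run
    # chmod 0666 /tmp/tmp.UcYOHjkV3h  # reproduces the failure seen above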
00:21:17.396 Latency(us) 00:21:17.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.396 =================================================================================================================== 00:21:17.396 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1329367 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 1326850 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1326850 ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1326850 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1326850 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1326850' 00:21:17.396 killing process with pid 1326850 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1326850 00:21:17.396 [2024-07-25 10:09:56.528943] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:17.396 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1326850 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1329573 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1329573 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1329573 ']' 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.657 10:09:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:17.657 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.657 [2024-07-25 10:09:56.706672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:17.657 [2024-07-25 10:09:56.706728] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.657 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.657 [2024-07-25 10:09:56.788877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.918 [2024-07-25 10:09:56.841769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.918 [2024-07-25 10:09:56.841799] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.918 [2024-07-25 10:09:56.841805] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.918 [2024-07-25 10:09:56.841810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.918 [2024-07-25 10:09:56.841814] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
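For orientation, the target restart captured here amounts to launching nvmf_tgt inside the test network namespace with a single-core mask and all tracepoint groups enabled, then waiting for its RPC socket. The namespace name and paths are specific to this CI host; the readiness probe below uses rpc_get_methods as one possible check rather than the harness's own waitforlisten loop:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # poll until /var/tmp/spdk.sock answers RPCs
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done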
00:21:17.918 [2024-07-25 10:09:56.841828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UcYOHjkV3h 00:21:18.490 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:18.763 [2024-07-25 10:09:57.655608] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.763 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.763 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:19.074 [2024-07-25 10:09:57.952337] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.074 [2024-07-25 10:09:57.952505] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.074 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:19.074 malloc0 00:21:19.074 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:19.335 [2024-07-25 10:09:58.379057] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:19.335 [2024-07-25 10:09:58.379074] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:19.335 [2024-07-25 10:09:58.379093] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:19.335 request: 00:21:19.335 { 00:21:19.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.335 "host": "nqn.2016-06.io.spdk:host1", 00:21:19.335 "psk": "/tmp/tmp.UcYOHjkV3h", 00:21:19.335 "method": "nvmf_subsystem_add_host", 00:21:19.335 "req_id": 1 00:21:19.335 } 00:21:19.335 Got JSON-RPC error response 00:21:19.335 response: 00:21:19.335 { 00:21:19.335 "code": -32603, 00:21:19.335 "message": "Internal error" 00:21:19.335 } 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 1329573 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1329573 ']' 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1329573 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1329573 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1329573' 00:21:19.335 killing process with pid 1329573 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1329573 00:21:19.335 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1329573 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UcYOHjkV3h 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1329965 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1329965 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1329965 ']' 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.596 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.596 [2024-07-25 10:09:58.632889] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:19.596 [2024-07-25 10:09:58.632943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.596 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.596 [2024-07-25 10:09:58.715707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.857 [2024-07-25 10:09:58.768059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.857 [2024-07-25 10:09:58.768094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.857 [2024-07-25 10:09:58.768099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.857 [2024-07-25 10:09:58.768104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.857 [2024-07-25 10:09:58.768108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
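With the key back at mode 0600, the setup_nvmf_tgt step that follows is a plain RPC sequence: TCP transport, subsystem, TLS-enabled listener (-k), a malloc backing bdev, a namespace, and finally the host entry that carries the PSK. The commands below are re-lined from the trace; RPC abbreviates the job's scripts/rpc.py path:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h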
00:21:19.857 [2024-07-25 10:09:58.768124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UcYOHjkV3h 00:21:20.428 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:20.689 [2024-07-25 10:09:59.566143] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.689 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.689 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:20.950 [2024-07-25 10:09:59.862868] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.950 [2024-07-25 10:09:59.863038] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.950 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.950 malloc0 00:21:20.950 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:21.211 [2024-07-25 10:10:00.306415] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1330322 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1330322 /var/tmp/bdevperf.sock 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 1330322 ']' 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:21.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:21.211 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:21.473 [2024-07-25 10:10:00.367709] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:21.473 [2024-07-25 10:10:00.367762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1330322 ] 00:21:21.473 EAL: No free 2048 kB hugepages reported on node 1 00:21:21.473 [2024-07-25 10:10:00.417916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.473 [2024-07-25 10:10:00.469777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:22.044 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:22.044 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:22.044 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:22.304 [2024-07-25 10:10:01.274728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:22.304 [2024-07-25 10:10:01.274796] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:22.304 TLSTESTn1 00:21:22.304 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:22.565 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:22.565 "subsystems": [ 00:21:22.565 { 00:21:22.565 "subsystem": "keyring", 00:21:22.565 "config": [] 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "subsystem": "iobuf", 00:21:22.565 "config": [ 00:21:22.565 { 00:21:22.565 "method": "iobuf_set_options", 00:21:22.565 "params": { 00:21:22.565 "small_pool_count": 8192, 00:21:22.565 "large_pool_count": 1024, 00:21:22.565 "small_bufsize": 8192, 00:21:22.565 "large_bufsize": 135168 00:21:22.565 } 00:21:22.565 } 00:21:22.565 ] 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "subsystem": "sock", 00:21:22.565 "config": [ 00:21:22.565 { 00:21:22.565 "method": "sock_set_default_impl", 00:21:22.565 "params": { 00:21:22.565 "impl_name": "posix" 00:21:22.565 } 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "method": "sock_impl_set_options", 00:21:22.565 "params": { 00:21:22.565 "impl_name": "ssl", 00:21:22.565 "recv_buf_size": 4096, 00:21:22.565 "send_buf_size": 4096, 
00:21:22.565 "enable_recv_pipe": true, 00:21:22.565 "enable_quickack": false, 00:21:22.565 "enable_placement_id": 0, 00:21:22.565 "enable_zerocopy_send_server": true, 00:21:22.565 "enable_zerocopy_send_client": false, 00:21:22.565 "zerocopy_threshold": 0, 00:21:22.565 "tls_version": 0, 00:21:22.565 "enable_ktls": false 00:21:22.565 } 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "method": "sock_impl_set_options", 00:21:22.565 "params": { 00:21:22.565 "impl_name": "posix", 00:21:22.565 "recv_buf_size": 2097152, 00:21:22.565 "send_buf_size": 2097152, 00:21:22.565 "enable_recv_pipe": true, 00:21:22.565 "enable_quickack": false, 00:21:22.565 "enable_placement_id": 0, 00:21:22.565 "enable_zerocopy_send_server": true, 00:21:22.565 "enable_zerocopy_send_client": false, 00:21:22.565 "zerocopy_threshold": 0, 00:21:22.565 "tls_version": 0, 00:21:22.565 "enable_ktls": false 00:21:22.565 } 00:21:22.565 } 00:21:22.565 ] 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "subsystem": "vmd", 00:21:22.565 "config": [] 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "subsystem": "accel", 00:21:22.565 "config": [ 00:21:22.565 { 00:21:22.565 "method": "accel_set_options", 00:21:22.565 "params": { 00:21:22.565 "small_cache_size": 128, 00:21:22.565 "large_cache_size": 16, 00:21:22.565 "task_count": 2048, 00:21:22.565 "sequence_count": 2048, 00:21:22.565 "buf_count": 2048 00:21:22.565 } 00:21:22.565 } 00:21:22.565 ] 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "subsystem": "bdev", 00:21:22.565 "config": [ 00:21:22.565 { 00:21:22.565 "method": "bdev_set_options", 00:21:22.565 "params": { 00:21:22.565 "bdev_io_pool_size": 65535, 00:21:22.565 "bdev_io_cache_size": 256, 00:21:22.565 "bdev_auto_examine": true, 00:21:22.565 "iobuf_small_cache_size": 128, 00:21:22.565 "iobuf_large_cache_size": 16 00:21:22.565 } 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "method": "bdev_raid_set_options", 00:21:22.565 "params": { 00:21:22.565 "process_window_size_kb": 1024, 00:21:22.565 "process_max_bandwidth_mb_sec": 0 00:21:22.565 } 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "method": "bdev_iscsi_set_options", 00:21:22.565 "params": { 00:21:22.565 "timeout_sec": 30 00:21:22.565 } 00:21:22.565 }, 00:21:22.565 { 00:21:22.565 "method": "bdev_nvme_set_options", 00:21:22.565 "params": { 00:21:22.565 "action_on_timeout": "none", 00:21:22.566 "timeout_us": 0, 00:21:22.566 "timeout_admin_us": 0, 00:21:22.566 "keep_alive_timeout_ms": 10000, 00:21:22.566 "arbitration_burst": 0, 00:21:22.566 "low_priority_weight": 0, 00:21:22.566 "medium_priority_weight": 0, 00:21:22.566 "high_priority_weight": 0, 00:21:22.566 "nvme_adminq_poll_period_us": 10000, 00:21:22.566 "nvme_ioq_poll_period_us": 0, 00:21:22.566 "io_queue_requests": 0, 00:21:22.566 "delay_cmd_submit": true, 00:21:22.566 "transport_retry_count": 4, 00:21:22.566 "bdev_retry_count": 3, 00:21:22.566 "transport_ack_timeout": 0, 00:21:22.566 "ctrlr_loss_timeout_sec": 0, 00:21:22.566 "reconnect_delay_sec": 0, 00:21:22.566 "fast_io_fail_timeout_sec": 0, 00:21:22.566 "disable_auto_failback": false, 00:21:22.566 "generate_uuids": false, 00:21:22.566 "transport_tos": 0, 00:21:22.566 "nvme_error_stat": false, 00:21:22.566 "rdma_srq_size": 0, 00:21:22.566 "io_path_stat": false, 00:21:22.566 "allow_accel_sequence": false, 00:21:22.566 "rdma_max_cq_size": 0, 00:21:22.566 "rdma_cm_event_timeout_ms": 0, 00:21:22.566 "dhchap_digests": [ 00:21:22.566 "sha256", 00:21:22.566 "sha384", 00:21:22.566 "sha512" 00:21:22.566 ], 00:21:22.566 "dhchap_dhgroups": [ 00:21:22.566 "null", 00:21:22.566 "ffdhe2048", 00:21:22.566 
"ffdhe3072", 00:21:22.566 "ffdhe4096", 00:21:22.566 "ffdhe6144", 00:21:22.566 "ffdhe8192" 00:21:22.566 ] 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "bdev_nvme_set_hotplug", 00:21:22.566 "params": { 00:21:22.566 "period_us": 100000, 00:21:22.566 "enable": false 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "bdev_malloc_create", 00:21:22.566 "params": { 00:21:22.566 "name": "malloc0", 00:21:22.566 "num_blocks": 8192, 00:21:22.566 "block_size": 4096, 00:21:22.566 "physical_block_size": 4096, 00:21:22.566 "uuid": "600ef14e-61d0-4c6f-964f-7c0dd5e3211b", 00:21:22.566 "optimal_io_boundary": 0, 00:21:22.566 "md_size": 0, 00:21:22.566 "dif_type": 0, 00:21:22.566 "dif_is_head_of_md": false, 00:21:22.566 "dif_pi_format": 0 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "bdev_wait_for_examine" 00:21:22.566 } 00:21:22.566 ] 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "subsystem": "nbd", 00:21:22.566 "config": [] 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "subsystem": "scheduler", 00:21:22.566 "config": [ 00:21:22.566 { 00:21:22.566 "method": "framework_set_scheduler", 00:21:22.566 "params": { 00:21:22.566 "name": "static" 00:21:22.566 } 00:21:22.566 } 00:21:22.566 ] 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "subsystem": "nvmf", 00:21:22.566 "config": [ 00:21:22.566 { 00:21:22.566 "method": "nvmf_set_config", 00:21:22.566 "params": { 00:21:22.566 "discovery_filter": "match_any", 00:21:22.566 "admin_cmd_passthru": { 00:21:22.566 "identify_ctrlr": false 00:21:22.566 } 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_set_max_subsystems", 00:21:22.566 "params": { 00:21:22.566 "max_subsystems": 1024 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_set_crdt", 00:21:22.566 "params": { 00:21:22.566 "crdt1": 0, 00:21:22.566 "crdt2": 0, 00:21:22.566 "crdt3": 0 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_create_transport", 00:21:22.566 "params": { 00:21:22.566 "trtype": "TCP", 00:21:22.566 "max_queue_depth": 128, 00:21:22.566 "max_io_qpairs_per_ctrlr": 127, 00:21:22.566 "in_capsule_data_size": 4096, 00:21:22.566 "max_io_size": 131072, 00:21:22.566 "io_unit_size": 131072, 00:21:22.566 "max_aq_depth": 128, 00:21:22.566 "num_shared_buffers": 511, 00:21:22.566 "buf_cache_size": 4294967295, 00:21:22.566 "dif_insert_or_strip": false, 00:21:22.566 "zcopy": false, 00:21:22.566 "c2h_success": false, 00:21:22.566 "sock_priority": 0, 00:21:22.566 "abort_timeout_sec": 1, 00:21:22.566 "ack_timeout": 0, 00:21:22.566 "data_wr_pool_size": 0 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_create_subsystem", 00:21:22.566 "params": { 00:21:22.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.566 "allow_any_host": false, 00:21:22.566 "serial_number": "SPDK00000000000001", 00:21:22.566 "model_number": "SPDK bdev Controller", 00:21:22.566 "max_namespaces": 10, 00:21:22.566 "min_cntlid": 1, 00:21:22.566 "max_cntlid": 65519, 00:21:22.566 "ana_reporting": false 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_subsystem_add_host", 00:21:22.566 "params": { 00:21:22.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.566 "host": "nqn.2016-06.io.spdk:host1", 00:21:22.566 "psk": "/tmp/tmp.UcYOHjkV3h" 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_subsystem_add_ns", 00:21:22.566 "params": { 00:21:22.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.566 "namespace": { 00:21:22.566 "nsid": 1, 00:21:22.566 
"bdev_name": "malloc0", 00:21:22.566 "nguid": "600EF14E61D04C6F964F7C0DD5E3211B", 00:21:22.566 "uuid": "600ef14e-61d0-4c6f-964f-7c0dd5e3211b", 00:21:22.566 "no_auto_visible": false 00:21:22.566 } 00:21:22.566 } 00:21:22.566 }, 00:21:22.566 { 00:21:22.566 "method": "nvmf_subsystem_add_listener", 00:21:22.566 "params": { 00:21:22.566 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.566 "listen_address": { 00:21:22.566 "trtype": "TCP", 00:21:22.566 "adrfam": "IPv4", 00:21:22.566 "traddr": "10.0.0.2", 00:21:22.566 "trsvcid": "4420" 00:21:22.566 }, 00:21:22.566 "secure_channel": true 00:21:22.566 } 00:21:22.566 } 00:21:22.566 ] 00:21:22.566 } 00:21:22.566 ] 00:21:22.566 }' 00:21:22.566 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:22.827 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:22.827 "subsystems": [ 00:21:22.827 { 00:21:22.827 "subsystem": "keyring", 00:21:22.827 "config": [] 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "subsystem": "iobuf", 00:21:22.827 "config": [ 00:21:22.827 { 00:21:22.827 "method": "iobuf_set_options", 00:21:22.827 "params": { 00:21:22.827 "small_pool_count": 8192, 00:21:22.827 "large_pool_count": 1024, 00:21:22.827 "small_bufsize": 8192, 00:21:22.827 "large_bufsize": 135168 00:21:22.827 } 00:21:22.827 } 00:21:22.827 ] 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "subsystem": "sock", 00:21:22.827 "config": [ 00:21:22.827 { 00:21:22.827 "method": "sock_set_default_impl", 00:21:22.827 "params": { 00:21:22.827 "impl_name": "posix" 00:21:22.827 } 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "method": "sock_impl_set_options", 00:21:22.827 "params": { 00:21:22.827 "impl_name": "ssl", 00:21:22.827 "recv_buf_size": 4096, 00:21:22.827 "send_buf_size": 4096, 00:21:22.827 "enable_recv_pipe": true, 00:21:22.827 "enable_quickack": false, 00:21:22.827 "enable_placement_id": 0, 00:21:22.827 "enable_zerocopy_send_server": true, 00:21:22.827 "enable_zerocopy_send_client": false, 00:21:22.827 "zerocopy_threshold": 0, 00:21:22.827 "tls_version": 0, 00:21:22.827 "enable_ktls": false 00:21:22.827 } 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "method": "sock_impl_set_options", 00:21:22.827 "params": { 00:21:22.827 "impl_name": "posix", 00:21:22.827 "recv_buf_size": 2097152, 00:21:22.827 "send_buf_size": 2097152, 00:21:22.827 "enable_recv_pipe": true, 00:21:22.827 "enable_quickack": false, 00:21:22.827 "enable_placement_id": 0, 00:21:22.827 "enable_zerocopy_send_server": true, 00:21:22.827 "enable_zerocopy_send_client": false, 00:21:22.827 "zerocopy_threshold": 0, 00:21:22.827 "tls_version": 0, 00:21:22.827 "enable_ktls": false 00:21:22.827 } 00:21:22.827 } 00:21:22.827 ] 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "subsystem": "vmd", 00:21:22.827 "config": [] 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "subsystem": "accel", 00:21:22.827 "config": [ 00:21:22.827 { 00:21:22.827 "method": "accel_set_options", 00:21:22.827 "params": { 00:21:22.827 "small_cache_size": 128, 00:21:22.827 "large_cache_size": 16, 00:21:22.827 "task_count": 2048, 00:21:22.827 "sequence_count": 2048, 00:21:22.827 "buf_count": 2048 00:21:22.827 } 00:21:22.827 } 00:21:22.827 ] 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "subsystem": "bdev", 00:21:22.827 "config": [ 00:21:22.827 { 00:21:22.827 "method": "bdev_set_options", 00:21:22.827 "params": { 00:21:22.827 "bdev_io_pool_size": 65535, 00:21:22.827 "bdev_io_cache_size": 256, 00:21:22.827 
"bdev_auto_examine": true, 00:21:22.827 "iobuf_small_cache_size": 128, 00:21:22.827 "iobuf_large_cache_size": 16 00:21:22.827 } 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "method": "bdev_raid_set_options", 00:21:22.827 "params": { 00:21:22.827 "process_window_size_kb": 1024, 00:21:22.827 "process_max_bandwidth_mb_sec": 0 00:21:22.827 } 00:21:22.827 }, 00:21:22.827 { 00:21:22.827 "method": "bdev_iscsi_set_options", 00:21:22.827 "params": { 00:21:22.827 "timeout_sec": 30 00:21:22.827 } 00:21:22.827 }, 00:21:22.827 { 00:21:22.828 "method": "bdev_nvme_set_options", 00:21:22.828 "params": { 00:21:22.828 "action_on_timeout": "none", 00:21:22.828 "timeout_us": 0, 00:21:22.828 "timeout_admin_us": 0, 00:21:22.828 "keep_alive_timeout_ms": 10000, 00:21:22.828 "arbitration_burst": 0, 00:21:22.828 "low_priority_weight": 0, 00:21:22.828 "medium_priority_weight": 0, 00:21:22.828 "high_priority_weight": 0, 00:21:22.828 "nvme_adminq_poll_period_us": 10000, 00:21:22.828 "nvme_ioq_poll_period_us": 0, 00:21:22.828 "io_queue_requests": 512, 00:21:22.828 "delay_cmd_submit": true, 00:21:22.828 "transport_retry_count": 4, 00:21:22.828 "bdev_retry_count": 3, 00:21:22.828 "transport_ack_timeout": 0, 00:21:22.828 "ctrlr_loss_timeout_sec": 0, 00:21:22.828 "reconnect_delay_sec": 0, 00:21:22.828 "fast_io_fail_timeout_sec": 0, 00:21:22.828 "disable_auto_failback": false, 00:21:22.828 "generate_uuids": false, 00:21:22.828 "transport_tos": 0, 00:21:22.828 "nvme_error_stat": false, 00:21:22.828 "rdma_srq_size": 0, 00:21:22.828 "io_path_stat": false, 00:21:22.828 "allow_accel_sequence": false, 00:21:22.828 "rdma_max_cq_size": 0, 00:21:22.828 "rdma_cm_event_timeout_ms": 0, 00:21:22.828 "dhchap_digests": [ 00:21:22.828 "sha256", 00:21:22.828 "sha384", 00:21:22.828 "sha512" 00:21:22.828 ], 00:21:22.828 "dhchap_dhgroups": [ 00:21:22.828 "null", 00:21:22.828 "ffdhe2048", 00:21:22.828 "ffdhe3072", 00:21:22.828 "ffdhe4096", 00:21:22.828 "ffdhe6144", 00:21:22.828 "ffdhe8192" 00:21:22.828 ] 00:21:22.828 } 00:21:22.828 }, 00:21:22.828 { 00:21:22.828 "method": "bdev_nvme_attach_controller", 00:21:22.828 "params": { 00:21:22.828 "name": "TLSTEST", 00:21:22.828 "trtype": "TCP", 00:21:22.828 "adrfam": "IPv4", 00:21:22.828 "traddr": "10.0.0.2", 00:21:22.828 "trsvcid": "4420", 00:21:22.828 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.828 "prchk_reftag": false, 00:21:22.828 "prchk_guard": false, 00:21:22.828 "ctrlr_loss_timeout_sec": 0, 00:21:22.828 "reconnect_delay_sec": 0, 00:21:22.828 "fast_io_fail_timeout_sec": 0, 00:21:22.828 "psk": "/tmp/tmp.UcYOHjkV3h", 00:21:22.828 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.828 "hdgst": false, 00:21:22.828 "ddgst": false 00:21:22.828 } 00:21:22.828 }, 00:21:22.828 { 00:21:22.828 "method": "bdev_nvme_set_hotplug", 00:21:22.828 "params": { 00:21:22.828 "period_us": 100000, 00:21:22.828 "enable": false 00:21:22.828 } 00:21:22.828 }, 00:21:22.828 { 00:21:22.828 "method": "bdev_wait_for_examine" 00:21:22.828 } 00:21:22.828 ] 00:21:22.828 }, 00:21:22.828 { 00:21:22.828 "subsystem": "nbd", 00:21:22.828 "config": [] 00:21:22.828 } 00:21:22.828 ] 00:21:22.828 }' 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 1330322 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1330322 ']' 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1330322 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330322 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330322' 00:21:22.828 killing process with pid 1330322 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1330322 00:21:22.828 Received shutdown signal, test time was about 10.000000 seconds 00:21:22.828 00:21:22.828 Latency(us) 00:21:22.828 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:22.828 =================================================================================================================== 00:21:22.828 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:22.828 [2024-07-25 10:10:01.895707] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:22.828 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1330322 00:21:23.089 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 1329965 00:21:23.089 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1329965 ']' 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1329965 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1329965 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1329965' 00:21:23.090 killing process with pid 1329965 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1329965 00:21:23.090 [2024-07-25 10:10:02.037632] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1329965 00:21:23.090 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:23.090 "subsystems": [ 00:21:23.090 { 00:21:23.090 "subsystem": "keyring", 00:21:23.090 "config": [] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "iobuf", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "iobuf_set_options", 00:21:23.090 "params": { 00:21:23.090 "small_pool_count": 8192, 00:21:23.090 "large_pool_count": 1024, 00:21:23.090 "small_bufsize": 8192, 00:21:23.090 "large_bufsize": 135168 00:21:23.090 } 00:21:23.090 } 00:21:23.090 ] 00:21:23.090 }, 
00:21:23.090 { 00:21:23.090 "subsystem": "sock", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "sock_set_default_impl", 00:21:23.090 "params": { 00:21:23.090 "impl_name": "posix" 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "sock_impl_set_options", 00:21:23.090 "params": { 00:21:23.090 "impl_name": "ssl", 00:21:23.090 "recv_buf_size": 4096, 00:21:23.090 "send_buf_size": 4096, 00:21:23.090 "enable_recv_pipe": true, 00:21:23.090 "enable_quickack": false, 00:21:23.090 "enable_placement_id": 0, 00:21:23.090 "enable_zerocopy_send_server": true, 00:21:23.090 "enable_zerocopy_send_client": false, 00:21:23.090 "zerocopy_threshold": 0, 00:21:23.090 "tls_version": 0, 00:21:23.090 "enable_ktls": false 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "sock_impl_set_options", 00:21:23.090 "params": { 00:21:23.090 "impl_name": "posix", 00:21:23.090 "recv_buf_size": 2097152, 00:21:23.090 "send_buf_size": 2097152, 00:21:23.090 "enable_recv_pipe": true, 00:21:23.090 "enable_quickack": false, 00:21:23.090 "enable_placement_id": 0, 00:21:23.090 "enable_zerocopy_send_server": true, 00:21:23.090 "enable_zerocopy_send_client": false, 00:21:23.090 "zerocopy_threshold": 0, 00:21:23.090 "tls_version": 0, 00:21:23.090 "enable_ktls": false 00:21:23.090 } 00:21:23.090 } 00:21:23.090 ] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "vmd", 00:21:23.090 "config": [] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "accel", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "accel_set_options", 00:21:23.090 "params": { 00:21:23.090 "small_cache_size": 128, 00:21:23.090 "large_cache_size": 16, 00:21:23.090 "task_count": 2048, 00:21:23.090 "sequence_count": 2048, 00:21:23.090 "buf_count": 2048 00:21:23.090 } 00:21:23.090 } 00:21:23.090 ] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "bdev", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "bdev_set_options", 00:21:23.090 "params": { 00:21:23.090 "bdev_io_pool_size": 65535, 00:21:23.090 "bdev_io_cache_size": 256, 00:21:23.090 "bdev_auto_examine": true, 00:21:23.090 "iobuf_small_cache_size": 128, 00:21:23.090 "iobuf_large_cache_size": 16 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_raid_set_options", 00:21:23.090 "params": { 00:21:23.090 "process_window_size_kb": 1024, 00:21:23.090 "process_max_bandwidth_mb_sec": 0 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_iscsi_set_options", 00:21:23.090 "params": { 00:21:23.090 "timeout_sec": 30 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_nvme_set_options", 00:21:23.090 "params": { 00:21:23.090 "action_on_timeout": "none", 00:21:23.090 "timeout_us": 0, 00:21:23.090 "timeout_admin_us": 0, 00:21:23.090 "keep_alive_timeout_ms": 10000, 00:21:23.090 "arbitration_burst": 0, 00:21:23.090 "low_priority_weight": 0, 00:21:23.090 "medium_priority_weight": 0, 00:21:23.090 "high_priority_weight": 0, 00:21:23.090 "nvme_adminq_poll_period_us": 10000, 00:21:23.090 "nvme_ioq_poll_period_us": 0, 00:21:23.090 "io_queue_requests": 0, 00:21:23.090 "delay_cmd_submit": true, 00:21:23.090 "transport_retry_count": 4, 00:21:23.090 "bdev_retry_count": 3, 00:21:23.090 "transport_ack_timeout": 0, 00:21:23.090 "ctrlr_loss_timeout_sec": 0, 00:21:23.090 "reconnect_delay_sec": 0, 00:21:23.090 "fast_io_fail_timeout_sec": 0, 00:21:23.090 "disable_auto_failback": false, 00:21:23.090 "generate_uuids": false, 00:21:23.090 "transport_tos": 0, 00:21:23.090 
"nvme_error_stat": false, 00:21:23.090 "rdma_srq_size": 0, 00:21:23.090 "io_path_stat": false, 00:21:23.090 "allow_accel_sequence": false, 00:21:23.090 "rdma_max_cq_size": 0, 00:21:23.090 "rdma_cm_event_timeout_ms": 0, 00:21:23.090 "dhchap_digests": [ 00:21:23.090 "sha256", 00:21:23.090 "sha384", 00:21:23.090 "sha512" 00:21:23.090 ], 00:21:23.090 "dhchap_dhgroups": [ 00:21:23.090 "null", 00:21:23.090 "ffdhe2048", 00:21:23.090 "ffdhe3072", 00:21:23.090 "ffdhe4096", 00:21:23.090 "ffdhe6144", 00:21:23.090 "ffdhe8192" 00:21:23.090 ] 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_nvme_set_hotplug", 00:21:23.090 "params": { 00:21:23.090 "period_us": 100000, 00:21:23.090 "enable": false 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_malloc_create", 00:21:23.090 "params": { 00:21:23.090 "name": "malloc0", 00:21:23.090 "num_blocks": 8192, 00:21:23.090 "block_size": 4096, 00:21:23.090 "physical_block_size": 4096, 00:21:23.090 "uuid": "600ef14e-61d0-4c6f-964f-7c0dd5e3211b", 00:21:23.090 "optimal_io_boundary": 0, 00:21:23.090 "md_size": 0, 00:21:23.090 "dif_type": 0, 00:21:23.090 "dif_is_head_of_md": false, 00:21:23.090 "dif_pi_format": 0 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "bdev_wait_for_examine" 00:21:23.090 } 00:21:23.090 ] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "nbd", 00:21:23.090 "config": [] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "scheduler", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "framework_set_scheduler", 00:21:23.090 "params": { 00:21:23.090 "name": "static" 00:21:23.090 } 00:21:23.090 } 00:21:23.090 ] 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "subsystem": "nvmf", 00:21:23.090 "config": [ 00:21:23.090 { 00:21:23.090 "method": "nvmf_set_config", 00:21:23.090 "params": { 00:21:23.090 "discovery_filter": "match_any", 00:21:23.090 "admin_cmd_passthru": { 00:21:23.090 "identify_ctrlr": false 00:21:23.090 } 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.090 "method": "nvmf_set_max_subsystems", 00:21:23.090 "params": { 00:21:23.090 "max_subsystems": 1024 00:21:23.090 } 00:21:23.090 }, 00:21:23.090 { 00:21:23.091 "method": "nvmf_set_crdt", 00:21:23.091 "params": { 00:21:23.091 "crdt1": 0, 00:21:23.091 "crdt2": 0, 00:21:23.091 "crdt3": 0 00:21:23.091 } 00:21:23.091 }, 00:21:23.091 { 00:21:23.091 "method": "nvmf_create_transport", 00:21:23.091 "params": { 00:21:23.091 "trtype": "TCP", 00:21:23.091 "max_queue_depth": 128, 00:21:23.091 "max_io_qpairs_per_ctrlr": 127, 00:21:23.091 "in_capsule_data_size": 4096, 00:21:23.091 "max_io_size": 131072, 00:21:23.091 "io_unit_size": 131072, 00:21:23.091 "max_aq_depth": 128, 00:21:23.091 "num_shared_buffers": 511, 00:21:23.091 "buf_cache_size": 4294967295, 00:21:23.091 "dif_insert_or_strip": false, 00:21:23.091 "zcopy": false, 00:21:23.091 "c2h_success": false, 00:21:23.091 "sock_priority": 0, 00:21:23.091 "abort_timeout_sec": 1, 00:21:23.091 "ack_timeout": 0, 00:21:23.091 "data_wr_pool_size": 0 00:21:23.091 } 00:21:23.091 }, 00:21:23.091 { 00:21:23.091 "method": "nvmf_create_subsystem", 00:21:23.091 "params": { 00:21:23.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.091 "allow_any_host": false, 00:21:23.091 "serial_number": "SPDK00000000000001", 00:21:23.091 "model_number": "SPDK bdev Controller", 00:21:23.091 "max_namespaces": 10, 00:21:23.091 "min_cntlid": 1, 00:21:23.091 "max_cntlid": 65519, 00:21:23.091 "ana_reporting": false 00:21:23.091 } 00:21:23.091 }, 00:21:23.091 { 00:21:23.091 "method": 
"nvmf_subsystem_add_host", 00:21:23.091 "params": { 00:21:23.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.091 "host": "nqn.2016-06.io.spdk:host1", 00:21:23.091 "psk": "/tmp/tmp.UcYOHjkV3h" 00:21:23.091 } 00:21:23.091 }, 00:21:23.091 { 00:21:23.091 "method": "nvmf_subsystem_add_ns", 00:21:23.091 "params": { 00:21:23.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.091 "namespace": { 00:21:23.091 "nsid": 1, 00:21:23.091 "bdev_name": "malloc0", 00:21:23.091 "nguid": "600EF14E61D04C6F964F7C0DD5E3211B", 00:21:23.091 "uuid": "600ef14e-61d0-4c6f-964f-7c0dd5e3211b", 00:21:23.091 "no_auto_visible": false 00:21:23.091 } 00:21:23.091 } 00:21:23.091 }, 00:21:23.091 { 00:21:23.091 "method": "nvmf_subsystem_add_listener", 00:21:23.091 "params": { 00:21:23.091 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:23.091 "listen_address": { 00:21:23.091 "trtype": "TCP", 00:21:23.091 "adrfam": "IPv4", 00:21:23.091 "traddr": "10.0.0.2", 00:21:23.091 "trsvcid": "4420" 00:21:23.091 }, 00:21:23.091 "secure_channel": true 00:21:23.091 } 00:21:23.091 } 00:21:23.091 ] 00:21:23.091 } 00:21:23.091 ] 00:21:23.091 }' 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1330749 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1330749 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1330749 ']' 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:23.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.091 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.091 [2024-07-25 10:10:02.216890] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:23.091 [2024-07-25 10:10:02.216943] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.351 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.351 [2024-07-25 10:10:02.299950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.351 [2024-07-25 10:10:02.359654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:23.351 [2024-07-25 10:10:02.359693] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.351 [2024-07-25 10:10:02.359698] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.351 [2024-07-25 10:10:02.359703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.351 [2024-07-25 10:10:02.359707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:23.351 [2024-07-25 10:10:02.359755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.612 [2024-07-25 10:10:02.542870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.612 [2024-07-25 10:10:02.568664] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.612 [2024-07-25 10:10:02.584703] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.612 [2024-07-25 10:10:02.584877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.873 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.873 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:23.873 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.873 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:23.873 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1331003 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1331003 /var/tmp/bdevperf.sock 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1331003 ']' 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
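The restart above feeds those saved configurations back on file descriptors (-c /dev/fd/62 for the target, -c /dev/fd/63 for bdevperf), so neither application has to repeat its RPC-based setup. A sketch of the same idea with bash process substitution, assuming the JSON is held in tgtconf and bdevperfconf as in the script:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # target: replay the saved target configuration, including the TLS listener and PSK host entry
    ip netns exec cvl_0_0_ns_spdk $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 \
        -c <(echo "$tgtconf") &
    # initiator: start bdevperf with the saved bdev/sock configuration, then rerun the verify job
    $SPDK_DIR/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &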
00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:24.135 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:24.135 "subsystems": [ 00:21:24.135 { 00:21:24.135 "subsystem": "keyring", 00:21:24.135 "config": [] 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "subsystem": "iobuf", 00:21:24.135 "config": [ 00:21:24.135 { 00:21:24.135 "method": "iobuf_set_options", 00:21:24.135 "params": { 00:21:24.135 "small_pool_count": 8192, 00:21:24.135 "large_pool_count": 1024, 00:21:24.135 "small_bufsize": 8192, 00:21:24.135 "large_bufsize": 135168 00:21:24.135 } 00:21:24.135 } 00:21:24.135 ] 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "subsystem": "sock", 00:21:24.135 "config": [ 00:21:24.135 { 00:21:24.135 "method": "sock_set_default_impl", 00:21:24.135 "params": { 00:21:24.135 "impl_name": "posix" 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "sock_impl_set_options", 00:21:24.135 "params": { 00:21:24.135 "impl_name": "ssl", 00:21:24.135 "recv_buf_size": 4096, 00:21:24.135 "send_buf_size": 4096, 00:21:24.135 "enable_recv_pipe": true, 00:21:24.135 "enable_quickack": false, 00:21:24.135 "enable_placement_id": 0, 00:21:24.135 "enable_zerocopy_send_server": true, 00:21:24.135 "enable_zerocopy_send_client": false, 00:21:24.135 "zerocopy_threshold": 0, 00:21:24.135 "tls_version": 0, 00:21:24.135 "enable_ktls": false 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "sock_impl_set_options", 00:21:24.135 "params": { 00:21:24.135 "impl_name": "posix", 00:21:24.135 "recv_buf_size": 2097152, 00:21:24.135 "send_buf_size": 2097152, 00:21:24.135 "enable_recv_pipe": true, 00:21:24.135 "enable_quickack": false, 00:21:24.135 "enable_placement_id": 0, 00:21:24.135 "enable_zerocopy_send_server": true, 00:21:24.135 "enable_zerocopy_send_client": false, 00:21:24.135 "zerocopy_threshold": 0, 00:21:24.135 "tls_version": 0, 00:21:24.135 "enable_ktls": false 00:21:24.135 } 00:21:24.135 } 00:21:24.135 ] 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "subsystem": "vmd", 00:21:24.135 "config": [] 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "subsystem": "accel", 00:21:24.135 "config": [ 00:21:24.135 { 00:21:24.135 "method": "accel_set_options", 00:21:24.135 "params": { 00:21:24.135 "small_cache_size": 128, 00:21:24.135 "large_cache_size": 16, 00:21:24.135 "task_count": 2048, 00:21:24.135 "sequence_count": 2048, 00:21:24.135 "buf_count": 2048 00:21:24.135 } 00:21:24.135 } 00:21:24.135 ] 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "subsystem": "bdev", 00:21:24.135 "config": [ 00:21:24.135 { 00:21:24.135 "method": "bdev_set_options", 00:21:24.135 "params": { 00:21:24.135 "bdev_io_pool_size": 65535, 00:21:24.135 "bdev_io_cache_size": 256, 00:21:24.135 "bdev_auto_examine": true, 00:21:24.135 "iobuf_small_cache_size": 128, 00:21:24.135 "iobuf_large_cache_size": 16 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "bdev_raid_set_options", 00:21:24.135 "params": { 00:21:24.135 "process_window_size_kb": 1024, 00:21:24.135 "process_max_bandwidth_mb_sec": 0 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "bdev_iscsi_set_options", 00:21:24.135 "params": { 00:21:24.135 "timeout_sec": 30 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "bdev_nvme_set_options", 00:21:24.135 "params": { 00:21:24.135 "action_on_timeout": "none", 00:21:24.135 "timeout_us": 
0, 00:21:24.135 "timeout_admin_us": 0, 00:21:24.135 "keep_alive_timeout_ms": 10000, 00:21:24.135 "arbitration_burst": 0, 00:21:24.135 "low_priority_weight": 0, 00:21:24.135 "medium_priority_weight": 0, 00:21:24.135 "high_priority_weight": 0, 00:21:24.135 "nvme_adminq_poll_period_us": 10000, 00:21:24.135 "nvme_ioq_poll_period_us": 0, 00:21:24.135 "io_queue_requests": 512, 00:21:24.135 "delay_cmd_submit": true, 00:21:24.135 "transport_retry_count": 4, 00:21:24.135 "bdev_retry_count": 3, 00:21:24.135 "transport_ack_timeout": 0, 00:21:24.135 "ctrlr_loss_timeout_sec": 0, 00:21:24.135 "reconnect_delay_sec": 0, 00:21:24.135 "fast_io_fail_timeout_sec": 0, 00:21:24.135 "disable_auto_failback": false, 00:21:24.135 "generate_uuids": false, 00:21:24.135 "transport_tos": 0, 00:21:24.135 "nvme_error_stat": false, 00:21:24.135 "rdma_srq_size": 0, 00:21:24.135 "io_path_stat": false, 00:21:24.135 "allow_accel_sequence": false, 00:21:24.135 "rdma_max_cq_size": 0, 00:21:24.135 "rdma_cm_event_timeout_ms": 0, 00:21:24.135 "dhchap_digests": [ 00:21:24.135 "sha256", 00:21:24.135 "sha384", 00:21:24.135 "sha512" 00:21:24.135 ], 00:21:24.135 "dhchap_dhgroups": [ 00:21:24.135 "null", 00:21:24.135 "ffdhe2048", 00:21:24.135 "ffdhe3072", 00:21:24.135 "ffdhe4096", 00:21:24.135 "ffdhe6144", 00:21:24.135 "ffdhe8192" 00:21:24.135 ] 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "bdev_nvme_attach_controller", 00:21:24.135 "params": { 00:21:24.135 "name": "TLSTEST", 00:21:24.135 "trtype": "TCP", 00:21:24.135 "adrfam": "IPv4", 00:21:24.135 "traddr": "10.0.0.2", 00:21:24.135 "trsvcid": "4420", 00:21:24.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.135 "prchk_reftag": false, 00:21:24.135 "prchk_guard": false, 00:21:24.135 "ctrlr_loss_timeout_sec": 0, 00:21:24.135 "reconnect_delay_sec": 0, 00:21:24.135 "fast_io_fail_timeout_sec": 0, 00:21:24.135 "psk": "/tmp/tmp.UcYOHjkV3h", 00:21:24.135 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.135 "hdgst": false, 00:21:24.135 "ddgst": false 00:21:24.135 } 00:21:24.135 }, 00:21:24.135 { 00:21:24.135 "method": "bdev_nvme_set_hotplug", 00:21:24.135 "params": { 00:21:24.135 "period_us": 100000, 00:21:24.135 "enable": false 00:21:24.135 } 00:21:24.135 }, 00:21:24.136 { 00:21:24.136 "method": "bdev_wait_for_examine" 00:21:24.136 } 00:21:24.136 ] 00:21:24.136 }, 00:21:24.136 { 00:21:24.136 "subsystem": "nbd", 00:21:24.136 "config": [] 00:21:24.136 } 00:21:24.136 ] 00:21:24.136 }' 00:21:24.136 [2024-07-25 10:10:03.072745] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:24.136 [2024-07-25 10:10:03.072797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1331003 ] 00:21:24.136 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.136 [2024-07-25 10:10:03.121679] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.136 [2024-07-25 10:10:03.173973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.397 [2024-07-25 10:10:03.298627] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.397 [2024-07-25 10:10:03.298688] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.969 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.969 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:24.969 10:10:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.969 Running I/O for 10 seconds... 00:21:34.974 00:21:34.974 Latency(us) 00:21:34.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.974 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.974 Verification LBA range: start 0x0 length 0x2000 00:21:34.974 TLSTESTn1 : 10.08 2101.25 8.21 0.00 0.00 60706.34 4942.51 131945.81 00:21:34.974 =================================================================================================================== 00:21:34.974 Total : 2101.25 8.21 0.00 0.00 60706.34 4942.51 131945.81 00:21:34.974 0 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 1331003 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1331003 ']' 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1331003 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1331003 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1331003' 00:21:34.974 killing process with pid 1331003 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1331003 00:21:34.974 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.974 00:21:34.974 Latency(us) 00:21:34.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.974 
=================================================================================================================== 00:21:34.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.974 [2024-07-25 10:10:14.073252] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:34.974 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1331003 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 1330749 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1330749 ']' 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1330749 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1330749 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.235 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1330749' 00:21:35.236 killing process with pid 1330749 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1330749 00:21:35.236 [2024-07-25 10:10:14.242773] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1330749 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:35.236 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1333177 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1333177 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1333177 ']' 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.497 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.497 [2024-07-25 10:10:14.419650] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:35.497 [2024-07-25 10:10:14.419705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.497 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.497 [2024-07-25 10:10:14.484151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.497 [2024-07-25 10:10:14.548426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.497 [2024-07-25 10:10:14.548463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.497 [2024-07-25 10:10:14.548470] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.497 [2024-07-25 10:10:14.548477] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.497 [2024-07-25 10:10:14.548482] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.497 [2024-07-25 10:10:14.548501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.076 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:36.077 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:36.077 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.077 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:36.077 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.342 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.342 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UcYOHjkV3h 00:21:36.342 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UcYOHjkV3h 00:21:36.343 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.343 [2024-07-25 10:10:15.363337] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.343 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.603 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.603 [2024-07-25 10:10:15.696161] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.603 [2024-07-25 10:10:15.696361] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.603 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.865 malloc0 00:21:36.865 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcYOHjkV3h 00:21:37.126 [2024-07-25 10:10:16.184080] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1333594 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1333594 /var/tmp/bdevperf.sock 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1333594 ']' 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:37.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:37.126 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.126 [2024-07-25 10:10:16.249338] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:37.126 [2024-07-25 10:10:16.249390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1333594 ] 00:21:37.387 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.387 [2024-07-25 10:10:16.325906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.387 [2024-07-25 10:10:16.379663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.960 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:37.960 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:37.960 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcYOHjkV3h 00:21:38.221 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:38.221 [2024-07-25 10:10:17.317750] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.482 nvme0n1 00:21:38.482 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:38.482 Running I/O for 1 seconds... 00:21:39.867 00:21:39.867 Latency(us) 00:21:39.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.867 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:39.867 Verification LBA range: start 0x0 length 0x2000 00:21:39.867 nvme0n1 : 1.07 1550.59 6.06 0.00 0.00 80365.51 5133.65 125829.12 00:21:39.867 =================================================================================================================== 00:21:39.867 Total : 1550.59 6.06 0.00 0.00 80365.51 5133.65 125829.12 00:21:39.867 0 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 1333594 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1333594 ']' 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1333594 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333594 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333594' 00:21:39.867 killing process with pid 1333594 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1333594 00:21:39.867 Received shutdown signal, 
test time was about 1.000000 seconds 00:21:39.867 00:21:39.867 Latency(us) 00:21:39.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:39.867 =================================================================================================================== 00:21:39.867 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1333594 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 1333177 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1333177 ']' 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1333177 00:21:39.867 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1333177 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1333177' 00:21:39.868 killing process with pid 1333177 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1333177 00:21:39.868 [2024-07-25 10:10:18.826353] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1333177 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1334068 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1334068 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1334068 ']' 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.868 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.129 [2024-07-25 10:10:19.023788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:40.129 [2024-07-25 10:10:19.023840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.129 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.129 [2024-07-25 10:10:19.089350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.129 [2024-07-25 10:10:19.151221] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.129 [2024-07-25 10:10:19.151260] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.129 [2024-07-25 10:10:19.151268] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.129 [2024-07-25 10:10:19.151274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.130 [2024-07-25 10:10:19.151279] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.130 [2024-07-25 10:10:19.151305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.754 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.754 [2024-07-25 10:10:19.841721] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.754 malloc0 00:21:40.754 [2024-07-25 10:10:19.868516] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.754 [2024-07-25 10:10:19.882369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=1334420 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 1334420 /var/tmp/bdevperf.sock 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:41.015 10:10:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1334420 ']' 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.015 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.015 [2024-07-25 10:10:19.955849] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:41.015 [2024-07-25 10:10:19.955899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334420 ] 00:21:41.015 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.015 [2024-07-25 10:10:20.031122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.015 [2024-07-25 10:10:20.085262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.587 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.587 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:41.587 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UcYOHjkV3h 00:21:41.848 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:41.848 [2024-07-25 10:10:20.981016] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:42.109 nvme0n1 00:21:42.109 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:42.109 Running I/O for 1 seconds... 
00:21:43.496 00:21:43.496 Latency(us) 00:21:43.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.496 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:43.496 Verification LBA range: start 0x0 length 0x2000 00:21:43.496 nvme0n1 : 1.07 1801.11 7.04 0.00 0.00 68978.12 6089.39 145926.83 00:21:43.496 =================================================================================================================== 00:21:43.496 Total : 1801.11 7.04 0.00 0.00 68978.12 6089.39 145926.83 00:21:43.496 0 00:21:43.496 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:43.496 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.496 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.496 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.496 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:43.496 "subsystems": [ 00:21:43.496 { 00:21:43.496 "subsystem": "keyring", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "keyring_file_add_key", 00:21:43.496 "params": { 00:21:43.496 "name": "key0", 00:21:43.496 "path": "/tmp/tmp.UcYOHjkV3h" 00:21:43.496 } 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "iobuf", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "iobuf_set_options", 00:21:43.496 "params": { 00:21:43.496 "small_pool_count": 8192, 00:21:43.496 "large_pool_count": 1024, 00:21:43.496 "small_bufsize": 8192, 00:21:43.496 "large_bufsize": 135168 00:21:43.496 } 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "sock", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "sock_set_default_impl", 00:21:43.496 "params": { 00:21:43.496 "impl_name": "posix" 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "sock_impl_set_options", 00:21:43.496 "params": { 00:21:43.496 "impl_name": "ssl", 00:21:43.496 "recv_buf_size": 4096, 00:21:43.496 "send_buf_size": 4096, 00:21:43.496 "enable_recv_pipe": true, 00:21:43.496 "enable_quickack": false, 00:21:43.496 "enable_placement_id": 0, 00:21:43.496 "enable_zerocopy_send_server": true, 00:21:43.496 "enable_zerocopy_send_client": false, 00:21:43.496 "zerocopy_threshold": 0, 00:21:43.496 "tls_version": 0, 00:21:43.496 "enable_ktls": false 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "sock_impl_set_options", 00:21:43.496 "params": { 00:21:43.496 "impl_name": "posix", 00:21:43.496 "recv_buf_size": 2097152, 00:21:43.496 "send_buf_size": 2097152, 00:21:43.496 "enable_recv_pipe": true, 00:21:43.496 "enable_quickack": false, 00:21:43.496 "enable_placement_id": 0, 00:21:43.496 "enable_zerocopy_send_server": true, 00:21:43.496 "enable_zerocopy_send_client": false, 00:21:43.496 "zerocopy_threshold": 0, 00:21:43.496 "tls_version": 0, 00:21:43.496 "enable_ktls": false 00:21:43.496 } 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "vmd", 00:21:43.496 "config": [] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "accel", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "accel_set_options", 00:21:43.496 "params": { 00:21:43.496 "small_cache_size": 128, 00:21:43.496 "large_cache_size": 16, 00:21:43.496 "task_count": 2048, 00:21:43.496 "sequence_count": 2048, 00:21:43.496 "buf_count": 
2048 00:21:43.496 } 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "bdev", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "bdev_set_options", 00:21:43.496 "params": { 00:21:43.496 "bdev_io_pool_size": 65535, 00:21:43.496 "bdev_io_cache_size": 256, 00:21:43.496 "bdev_auto_examine": true, 00:21:43.496 "iobuf_small_cache_size": 128, 00:21:43.496 "iobuf_large_cache_size": 16 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_raid_set_options", 00:21:43.496 "params": { 00:21:43.496 "process_window_size_kb": 1024, 00:21:43.496 "process_max_bandwidth_mb_sec": 0 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_iscsi_set_options", 00:21:43.496 "params": { 00:21:43.496 "timeout_sec": 30 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_nvme_set_options", 00:21:43.496 "params": { 00:21:43.496 "action_on_timeout": "none", 00:21:43.496 "timeout_us": 0, 00:21:43.496 "timeout_admin_us": 0, 00:21:43.496 "keep_alive_timeout_ms": 10000, 00:21:43.496 "arbitration_burst": 0, 00:21:43.496 "low_priority_weight": 0, 00:21:43.496 "medium_priority_weight": 0, 00:21:43.496 "high_priority_weight": 0, 00:21:43.496 "nvme_adminq_poll_period_us": 10000, 00:21:43.496 "nvme_ioq_poll_period_us": 0, 00:21:43.496 "io_queue_requests": 0, 00:21:43.496 "delay_cmd_submit": true, 00:21:43.496 "transport_retry_count": 4, 00:21:43.496 "bdev_retry_count": 3, 00:21:43.496 "transport_ack_timeout": 0, 00:21:43.496 "ctrlr_loss_timeout_sec": 0, 00:21:43.496 "reconnect_delay_sec": 0, 00:21:43.496 "fast_io_fail_timeout_sec": 0, 00:21:43.496 "disable_auto_failback": false, 00:21:43.496 "generate_uuids": false, 00:21:43.496 "transport_tos": 0, 00:21:43.496 "nvme_error_stat": false, 00:21:43.496 "rdma_srq_size": 0, 00:21:43.496 "io_path_stat": false, 00:21:43.496 "allow_accel_sequence": false, 00:21:43.496 "rdma_max_cq_size": 0, 00:21:43.496 "rdma_cm_event_timeout_ms": 0, 00:21:43.496 "dhchap_digests": [ 00:21:43.496 "sha256", 00:21:43.496 "sha384", 00:21:43.496 "sha512" 00:21:43.496 ], 00:21:43.496 "dhchap_dhgroups": [ 00:21:43.496 "null", 00:21:43.496 "ffdhe2048", 00:21:43.496 "ffdhe3072", 00:21:43.496 "ffdhe4096", 00:21:43.496 "ffdhe6144", 00:21:43.496 "ffdhe8192" 00:21:43.496 ] 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_nvme_set_hotplug", 00:21:43.496 "params": { 00:21:43.496 "period_us": 100000, 00:21:43.496 "enable": false 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_malloc_create", 00:21:43.496 "params": { 00:21:43.496 "name": "malloc0", 00:21:43.496 "num_blocks": 8192, 00:21:43.496 "block_size": 4096, 00:21:43.496 "physical_block_size": 4096, 00:21:43.496 "uuid": "a8d8efc7-4bc1-4212-8c5f-f87414c43ee0", 00:21:43.496 "optimal_io_boundary": 0, 00:21:43.496 "md_size": 0, 00:21:43.496 "dif_type": 0, 00:21:43.496 "dif_is_head_of_md": false, 00:21:43.496 "dif_pi_format": 0 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "bdev_wait_for_examine" 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "nbd", 00:21:43.496 "config": [] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "scheduler", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 "method": "framework_set_scheduler", 00:21:43.496 "params": { 00:21:43.496 "name": "static" 00:21:43.496 } 00:21:43.496 } 00:21:43.496 ] 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "subsystem": "nvmf", 00:21:43.496 "config": [ 00:21:43.496 { 00:21:43.496 
"method": "nvmf_set_config", 00:21:43.496 "params": { 00:21:43.496 "discovery_filter": "match_any", 00:21:43.496 "admin_cmd_passthru": { 00:21:43.496 "identify_ctrlr": false 00:21:43.496 } 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "nvmf_set_max_subsystems", 00:21:43.496 "params": { 00:21:43.496 "max_subsystems": 1024 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "nvmf_set_crdt", 00:21:43.496 "params": { 00:21:43.496 "crdt1": 0, 00:21:43.496 "crdt2": 0, 00:21:43.496 "crdt3": 0 00:21:43.496 } 00:21:43.496 }, 00:21:43.496 { 00:21:43.496 "method": "nvmf_create_transport", 00:21:43.496 "params": { 00:21:43.496 "trtype": "TCP", 00:21:43.496 "max_queue_depth": 128, 00:21:43.496 "max_io_qpairs_per_ctrlr": 127, 00:21:43.496 "in_capsule_data_size": 4096, 00:21:43.496 "max_io_size": 131072, 00:21:43.496 "io_unit_size": 131072, 00:21:43.496 "max_aq_depth": 128, 00:21:43.496 "num_shared_buffers": 511, 00:21:43.497 "buf_cache_size": 4294967295, 00:21:43.497 "dif_insert_or_strip": false, 00:21:43.497 "zcopy": false, 00:21:43.497 "c2h_success": false, 00:21:43.497 "sock_priority": 0, 00:21:43.497 "abort_timeout_sec": 1, 00:21:43.497 "ack_timeout": 0, 00:21:43.497 "data_wr_pool_size": 0 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "nvmf_create_subsystem", 00:21:43.497 "params": { 00:21:43.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.497 "allow_any_host": false, 00:21:43.497 "serial_number": "00000000000000000000", 00:21:43.497 "model_number": "SPDK bdev Controller", 00:21:43.497 "max_namespaces": 32, 00:21:43.497 "min_cntlid": 1, 00:21:43.497 "max_cntlid": 65519, 00:21:43.497 "ana_reporting": false 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "nvmf_subsystem_add_host", 00:21:43.497 "params": { 00:21:43.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.497 "host": "nqn.2016-06.io.spdk:host1", 00:21:43.497 "psk": "key0" 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "nvmf_subsystem_add_ns", 00:21:43.497 "params": { 00:21:43.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.497 "namespace": { 00:21:43.497 "nsid": 1, 00:21:43.497 "bdev_name": "malloc0", 00:21:43.497 "nguid": "A8D8EFC74BC142128C5FF87414C43EE0", 00:21:43.497 "uuid": "a8d8efc7-4bc1-4212-8c5f-f87414c43ee0", 00:21:43.497 "no_auto_visible": false 00:21:43.497 } 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "nvmf_subsystem_add_listener", 00:21:43.497 "params": { 00:21:43.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.497 "listen_address": { 00:21:43.497 "trtype": "TCP", 00:21:43.497 "adrfam": "IPv4", 00:21:43.497 "traddr": "10.0.0.2", 00:21:43.497 "trsvcid": "4420" 00:21:43.497 }, 00:21:43.497 "secure_channel": false, 00:21:43.497 "sock_impl": "ssl" 00:21:43.497 } 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 }' 00:21:43.497 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:43.497 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:43.497 "subsystems": [ 00:21:43.497 { 00:21:43.497 "subsystem": "keyring", 00:21:43.497 "config": [ 00:21:43.497 { 00:21:43.497 "method": "keyring_file_add_key", 00:21:43.497 "params": { 00:21:43.497 "name": "key0", 00:21:43.497 "path": "/tmp/tmp.UcYOHjkV3h" 00:21:43.497 } 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "subsystem": "iobuf", 00:21:43.497 
"config": [ 00:21:43.497 { 00:21:43.497 "method": "iobuf_set_options", 00:21:43.497 "params": { 00:21:43.497 "small_pool_count": 8192, 00:21:43.497 "large_pool_count": 1024, 00:21:43.497 "small_bufsize": 8192, 00:21:43.497 "large_bufsize": 135168 00:21:43.497 } 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "subsystem": "sock", 00:21:43.497 "config": [ 00:21:43.497 { 00:21:43.497 "method": "sock_set_default_impl", 00:21:43.497 "params": { 00:21:43.497 "impl_name": "posix" 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "sock_impl_set_options", 00:21:43.497 "params": { 00:21:43.497 "impl_name": "ssl", 00:21:43.497 "recv_buf_size": 4096, 00:21:43.497 "send_buf_size": 4096, 00:21:43.497 "enable_recv_pipe": true, 00:21:43.497 "enable_quickack": false, 00:21:43.497 "enable_placement_id": 0, 00:21:43.497 "enable_zerocopy_send_server": true, 00:21:43.497 "enable_zerocopy_send_client": false, 00:21:43.497 "zerocopy_threshold": 0, 00:21:43.497 "tls_version": 0, 00:21:43.497 "enable_ktls": false 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "sock_impl_set_options", 00:21:43.497 "params": { 00:21:43.497 "impl_name": "posix", 00:21:43.497 "recv_buf_size": 2097152, 00:21:43.497 "send_buf_size": 2097152, 00:21:43.497 "enable_recv_pipe": true, 00:21:43.497 "enable_quickack": false, 00:21:43.497 "enable_placement_id": 0, 00:21:43.497 "enable_zerocopy_send_server": true, 00:21:43.497 "enable_zerocopy_send_client": false, 00:21:43.497 "zerocopy_threshold": 0, 00:21:43.497 "tls_version": 0, 00:21:43.497 "enable_ktls": false 00:21:43.497 } 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "subsystem": "vmd", 00:21:43.497 "config": [] 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "subsystem": "accel", 00:21:43.497 "config": [ 00:21:43.497 { 00:21:43.497 "method": "accel_set_options", 00:21:43.497 "params": { 00:21:43.497 "small_cache_size": 128, 00:21:43.497 "large_cache_size": 16, 00:21:43.497 "task_count": 2048, 00:21:43.497 "sequence_count": 2048, 00:21:43.497 "buf_count": 2048 00:21:43.497 } 00:21:43.497 } 00:21:43.497 ] 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "subsystem": "bdev", 00:21:43.497 "config": [ 00:21:43.497 { 00:21:43.497 "method": "bdev_set_options", 00:21:43.497 "params": { 00:21:43.497 "bdev_io_pool_size": 65535, 00:21:43.497 "bdev_io_cache_size": 256, 00:21:43.497 "bdev_auto_examine": true, 00:21:43.497 "iobuf_small_cache_size": 128, 00:21:43.497 "iobuf_large_cache_size": 16 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "bdev_raid_set_options", 00:21:43.497 "params": { 00:21:43.497 "process_window_size_kb": 1024, 00:21:43.497 "process_max_bandwidth_mb_sec": 0 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "bdev_iscsi_set_options", 00:21:43.497 "params": { 00:21:43.497 "timeout_sec": 30 00:21:43.497 } 00:21:43.497 }, 00:21:43.497 { 00:21:43.497 "method": "bdev_nvme_set_options", 00:21:43.497 "params": { 00:21:43.497 "action_on_timeout": "none", 00:21:43.497 "timeout_us": 0, 00:21:43.497 "timeout_admin_us": 0, 00:21:43.497 "keep_alive_timeout_ms": 10000, 00:21:43.497 "arbitration_burst": 0, 00:21:43.497 "low_priority_weight": 0, 00:21:43.497 "medium_priority_weight": 0, 00:21:43.497 "high_priority_weight": 0, 00:21:43.497 "nvme_adminq_poll_period_us": 10000, 00:21:43.497 "nvme_ioq_poll_period_us": 0, 00:21:43.497 "io_queue_requests": 512, 00:21:43.497 "delay_cmd_submit": true, 00:21:43.497 "transport_retry_count": 4, 00:21:43.497 "bdev_retry_count": 3, 
00:21:43.497 "transport_ack_timeout": 0, 00:21:43.497 "ctrlr_loss_timeout_sec": 0, 00:21:43.497 "reconnect_delay_sec": 0, 00:21:43.497 "fast_io_fail_timeout_sec": 0, 00:21:43.497 "disable_auto_failback": false, 00:21:43.497 "generate_uuids": false, 00:21:43.497 "transport_tos": 0, 00:21:43.497 "nvme_error_stat": false, 00:21:43.497 "rdma_srq_size": 0, 00:21:43.497 "io_path_stat": false, 00:21:43.497 "allow_accel_sequence": false, 00:21:43.497 "rdma_max_cq_size": 0, 00:21:43.497 "rdma_cm_event_timeout_ms": 0, 00:21:43.497 "dhchap_digests": [ 00:21:43.497 "sha256", 00:21:43.497 "sha384", 00:21:43.497 "sha512" 00:21:43.497 ], 00:21:43.497 "dhchap_dhgroups": [ 00:21:43.498 "null", 00:21:43.498 "ffdhe2048", 00:21:43.498 "ffdhe3072", 00:21:43.498 "ffdhe4096", 00:21:43.498 "ffdhe6144", 00:21:43.498 "ffdhe8192" 00:21:43.498 ] 00:21:43.498 } 00:21:43.498 }, 00:21:43.498 { 00:21:43.498 "method": "bdev_nvme_attach_controller", 00:21:43.498 "params": { 00:21:43.498 "name": "nvme0", 00:21:43.498 "trtype": "TCP", 00:21:43.498 "adrfam": "IPv4", 00:21:43.498 "traddr": "10.0.0.2", 00:21:43.498 "trsvcid": "4420", 00:21:43.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.498 "prchk_reftag": false, 00:21:43.498 "prchk_guard": false, 00:21:43.498 "ctrlr_loss_timeout_sec": 0, 00:21:43.498 "reconnect_delay_sec": 0, 00:21:43.498 "fast_io_fail_timeout_sec": 0, 00:21:43.498 "psk": "key0", 00:21:43.498 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.498 "hdgst": false, 00:21:43.498 "ddgst": false 00:21:43.498 } 00:21:43.498 }, 00:21:43.498 { 00:21:43.498 "method": "bdev_nvme_set_hotplug", 00:21:43.498 "params": { 00:21:43.498 "period_us": 100000, 00:21:43.498 "enable": false 00:21:43.498 } 00:21:43.498 }, 00:21:43.498 { 00:21:43.498 "method": "bdev_enable_histogram", 00:21:43.498 "params": { 00:21:43.498 "name": "nvme0n1", 00:21:43.498 "enable": true 00:21:43.498 } 00:21:43.498 }, 00:21:43.498 { 00:21:43.498 "method": "bdev_wait_for_examine" 00:21:43.498 } 00:21:43.498 ] 00:21:43.498 }, 00:21:43.498 { 00:21:43.498 "subsystem": "nbd", 00:21:43.498 "config": [] 00:21:43.498 } 00:21:43.498 ] 00:21:43.498 }' 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 1334420 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1334420 ']' 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1334420 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.498 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334420 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334420' 00:21:43.759 killing process with pid 1334420 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1334420 00:21:43.759 Received shutdown signal, test time was about 1.000000 seconds 00:21:43.759 00:21:43.759 Latency(us) 00:21:43.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.759 
=================================================================================================================== 00:21:43.759 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1334420 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 1334068 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1334068 ']' 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1334068 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334068 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334068' 00:21:43.759 killing process with pid 1334068 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1334068 00:21:43.759 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1334068 00:21:44.021 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:44.021 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:44.021 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:44.021 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:44.021 "subsystems": [ 00:21:44.021 { 00:21:44.021 "subsystem": "keyring", 00:21:44.021 "config": [ 00:21:44.021 { 00:21:44.021 "method": "keyring_file_add_key", 00:21:44.021 "params": { 00:21:44.021 "name": "key0", 00:21:44.021 "path": "/tmp/tmp.UcYOHjkV3h" 00:21:44.021 } 00:21:44.021 } 00:21:44.021 ] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "iobuf", 00:21:44.021 "config": [ 00:21:44.021 { 00:21:44.021 "method": "iobuf_set_options", 00:21:44.021 "params": { 00:21:44.021 "small_pool_count": 8192, 00:21:44.021 "large_pool_count": 1024, 00:21:44.021 "small_bufsize": 8192, 00:21:44.021 "large_bufsize": 135168 00:21:44.021 } 00:21:44.021 } 00:21:44.021 ] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "sock", 00:21:44.021 "config": [ 00:21:44.021 { 00:21:44.021 "method": "sock_set_default_impl", 00:21:44.021 "params": { 00:21:44.021 "impl_name": "posix" 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "sock_impl_set_options", 00:21:44.021 "params": { 00:21:44.021 "impl_name": "ssl", 00:21:44.021 "recv_buf_size": 4096, 00:21:44.021 "send_buf_size": 4096, 00:21:44.021 "enable_recv_pipe": true, 00:21:44.021 "enable_quickack": false, 00:21:44.021 "enable_placement_id": 0, 00:21:44.021 "enable_zerocopy_send_server": true, 00:21:44.021 "enable_zerocopy_send_client": false, 00:21:44.021 "zerocopy_threshold": 0, 00:21:44.021 "tls_version": 0, 00:21:44.021 "enable_ktls": false 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": 
"sock_impl_set_options", 00:21:44.021 "params": { 00:21:44.021 "impl_name": "posix", 00:21:44.021 "recv_buf_size": 2097152, 00:21:44.021 "send_buf_size": 2097152, 00:21:44.021 "enable_recv_pipe": true, 00:21:44.021 "enable_quickack": false, 00:21:44.021 "enable_placement_id": 0, 00:21:44.021 "enable_zerocopy_send_server": true, 00:21:44.021 "enable_zerocopy_send_client": false, 00:21:44.021 "zerocopy_threshold": 0, 00:21:44.021 "tls_version": 0, 00:21:44.021 "enable_ktls": false 00:21:44.021 } 00:21:44.021 } 00:21:44.021 ] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "vmd", 00:21:44.021 "config": [] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "accel", 00:21:44.021 "config": [ 00:21:44.021 { 00:21:44.021 "method": "accel_set_options", 00:21:44.021 "params": { 00:21:44.021 "small_cache_size": 128, 00:21:44.021 "large_cache_size": 16, 00:21:44.021 "task_count": 2048, 00:21:44.021 "sequence_count": 2048, 00:21:44.021 "buf_count": 2048 00:21:44.021 } 00:21:44.021 } 00:21:44.021 ] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "bdev", 00:21:44.021 "config": [ 00:21:44.021 { 00:21:44.021 "method": "bdev_set_options", 00:21:44.021 "params": { 00:21:44.021 "bdev_io_pool_size": 65535, 00:21:44.021 "bdev_io_cache_size": 256, 00:21:44.021 "bdev_auto_examine": true, 00:21:44.021 "iobuf_small_cache_size": 128, 00:21:44.021 "iobuf_large_cache_size": 16 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_raid_set_options", 00:21:44.021 "params": { 00:21:44.021 "process_window_size_kb": 1024, 00:21:44.021 "process_max_bandwidth_mb_sec": 0 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_iscsi_set_options", 00:21:44.021 "params": { 00:21:44.021 "timeout_sec": 30 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_nvme_set_options", 00:21:44.021 "params": { 00:21:44.021 "action_on_timeout": "none", 00:21:44.021 "timeout_us": 0, 00:21:44.021 "timeout_admin_us": 0, 00:21:44.021 "keep_alive_timeout_ms": 10000, 00:21:44.021 "arbitration_burst": 0, 00:21:44.021 "low_priority_weight": 0, 00:21:44.021 "medium_priority_weight": 0, 00:21:44.021 "high_priority_weight": 0, 00:21:44.021 "nvme_adminq_poll_period_us": 10000, 00:21:44.021 "nvme_ioq_poll_period_us": 0, 00:21:44.021 "io_queue_requests": 0, 00:21:44.021 "delay_cmd_submit": true, 00:21:44.021 "transport_retry_count": 4, 00:21:44.021 "bdev_retry_count": 3, 00:21:44.021 "transport_ack_timeout": 0, 00:21:44.021 "ctrlr_loss_timeout_sec": 0, 00:21:44.021 "reconnect_delay_sec": 0, 00:21:44.021 "fast_io_fail_timeout_sec": 0, 00:21:44.021 "disable_auto_failback": false, 00:21:44.021 "generate_uuids": false, 00:21:44.021 "transport_tos": 0, 00:21:44.021 "nvme_error_stat": false, 00:21:44.021 "rdma_srq_size": 0, 00:21:44.021 "io_path_stat": false, 00:21:44.021 "allow_accel_sequence": false, 00:21:44.021 "rdma_max_cq_size": 0, 00:21:44.021 "rdma_cm_event_timeout_ms": 0, 00:21:44.021 "dhchap_digests": [ 00:21:44.021 "sha256", 00:21:44.021 "sha384", 00:21:44.021 "sha512" 00:21:44.021 ], 00:21:44.021 "dhchap_dhgroups": [ 00:21:44.021 "null", 00:21:44.021 "ffdhe2048", 00:21:44.021 "ffdhe3072", 00:21:44.021 "ffdhe4096", 00:21:44.021 "ffdhe6144", 00:21:44.021 "ffdhe8192" 00:21:44.021 ] 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_nvme_set_hotplug", 00:21:44.021 "params": { 00:21:44.021 "period_us": 100000, 00:21:44.021 "enable": false 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_malloc_create", 00:21:44.021 
"params": { 00:21:44.021 "name": "malloc0", 00:21:44.021 "num_blocks": 8192, 00:21:44.021 "block_size": 4096, 00:21:44.021 "physical_block_size": 4096, 00:21:44.021 "uuid": "a8d8efc7-4bc1-4212-8c5f-f87414c43ee0", 00:21:44.021 "optimal_io_boundary": 0, 00:21:44.021 "md_size": 0, 00:21:44.021 "dif_type": 0, 00:21:44.021 "dif_is_head_of_md": false, 00:21:44.021 "dif_pi_format": 0 00:21:44.021 } 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "method": "bdev_wait_for_examine" 00:21:44.021 } 00:21:44.021 ] 00:21:44.021 }, 00:21:44.021 { 00:21:44.021 "subsystem": "nbd", 00:21:44.021 "config": [] 00:21:44.021 }, 00:21:44.022 { 00:21:44.022 "subsystem": "scheduler", 00:21:44.022 "config": [ 00:21:44.022 { 00:21:44.022 "method": "framework_set_scheduler", 00:21:44.022 "params": { 00:21:44.022 "name": "static" 00:21:44.022 } 00:21:44.022 } 00:21:44.022 ] 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "subsystem": "nvmf", 00:21:44.022 "config": [ 00:21:44.022 { 00:21:44.022 "method": "nvmf_set_config", 00:21:44.022 "params": { 00:21:44.022 "discovery_filter": "match_any", 00:21:44.022 "admin_cmd_passthru": { 00:21:44.022 "identify_ctrlr": false 00:21:44.022 } 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_set_max_subsystems", 00:21:44.022 "params": { 00:21:44.022 "max_subsystems": 1024 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_set_crdt", 00:21:44.022 "params": { 00:21:44.022 "crdt1": 0, 00:21:44.022 "crdt2": 0, 00:21:44.022 "crdt3": 0 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_create_transport", 00:21:44.022 "params": { 00:21:44.022 "trtype": "TCP", 00:21:44.022 "max_queue_depth": 128, 00:21:44.022 "max_io_qpairs_per_ctrlr": 127, 00:21:44.022 "in_capsule_data_size": 4096, 00:21:44.022 "max_io_size": 131072, 00:21:44.022 "io_unit_size": 131072, 00:21:44.022 "max_aq_depth": 128, 00:21:44.022 "num_shared_buffers": 511, 00:21:44.022 "buf_cache_size": 4294967295, 00:21:44.022 "dif_insert_or_strip": false, 00:21:44.022 "zcopy": false, 00:21:44.022 "c2h_success": false, 00:21:44.022 "sock_priority": 0, 00:21:44.022 "abort_timeout_sec": 1, 00:21:44.022 "ack_timeout": 0, 00:21:44.022 "data_wr_pool_size": 0 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_create_subsystem", 00:21:44.022 "params": { 00:21:44.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.022 "allow_any_host": false, 00:21:44.022 "serial_number": "00000000000000000000", 00:21:44.022 "model_number": "SPDK bdev Controller", 00:21:44.022 "max_namespaces": 32, 00:21:44.022 "min_cntlid": 1, 00:21:44.022 "max_cntlid": 65519, 00:21:44.022 "ana_reporting": false 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_subsystem_add_host", 00:21:44.022 "params": { 00:21:44.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.022 "host": "nqn.2016-06.io.spdk:host1", 00:21:44.022 "psk": "key0" 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_subsystem_add_ns", 00:21:44.022 "params": { 00:21:44.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.022 "namespace": { 00:21:44.022 "nsid": 1, 00:21:44.022 "bdev_name": "malloc0", 00:21:44.022 "nguid": "A8D8EFC74BC142128C5FF87414C43EE0", 00:21:44.022 "uuid": "a8d8efc7-4bc1-4212-8c5f-f87414c43ee0", 00:21:44.022 "no_auto_visible": false 00:21:44.022 } 00:21:44.022 } 00:21:44.022 }, 00:21:44.022 { 00:21:44.022 "method": "nvmf_subsystem_add_listener", 00:21:44.022 "params": { 00:21:44.022 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.022 "listen_address": { 
00:21:44.022 "trtype": "TCP", 00:21:44.022 "adrfam": "IPv4", 00:21:44.022 "traddr": "10.0.0.2", 00:21:44.022 "trsvcid": "4420" 00:21:44.022 }, 00:21:44.022 "secure_channel": false, 00:21:44.022 "sock_impl": "ssl" 00:21:44.022 } 00:21:44.022 } 00:21:44.022 ] 00:21:44.022 } 00:21:44.022 ] 00:21:44.022 }' 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1334940 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1334940 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1334940 ']' 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.022 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.022 [2024-07-25 10:10:23.044692] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:44.022 [2024-07-25 10:10:23.044752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:44.022 EAL: No free 2048 kB hugepages reported on node 1 00:21:44.022 [2024-07-25 10:10:23.109497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.283 [2024-07-25 10:10:23.174938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:44.283 [2024-07-25 10:10:23.174975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:44.283 [2024-07-25 10:10:23.174983] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:44.283 [2024-07-25 10:10:23.174989] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:44.283 [2024-07-25 10:10:23.174995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:44.283 [2024-07-25 10:10:23.175043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.283 [2024-07-25 10:10:23.372461] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:44.543 [2024-07-25 10:10:23.419135] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:44.543 [2024-07-25 10:10:23.419348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=1335134 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 1335134 /var/tmp/bdevperf.sock 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1335134 ']' 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
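[editor's aside] The "Waiting for process to start up..." message comes from the waitforlisten helper in autotest_common.sh, whose body is not part of this excerpt. The loop below is only a rough sketch of the pattern under the assumption that rpc.py and its rpc_get_methods call are available; the real helper also tracks the PID and has its own retry and timeout handling.

    # Rough sketch (not the real helper): poll the app's RPC socket until it answers.
    RPC_PY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    wait_for_rpc_socket() {
      local sock=$1 max_retries=${2:-100} i
      for ((i = 0; i < max_retries; i++)); do
        # rpc_get_methods succeeds once the app is up and listening on $sock
        if "$RPC_PY" -s "$sock" rpc_get_methods >/dev/null 2>&1; then
          return 0
        fi
        sleep 0.1
      done
      return 1
    }

    wait_for_rpc_socket /var/tmp/bdevperf.sock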
00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.804 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:44.804 "subsystems": [ 00:21:44.804 { 00:21:44.804 "subsystem": "keyring", 00:21:44.804 "config": [ 00:21:44.804 { 00:21:44.804 "method": "keyring_file_add_key", 00:21:44.804 "params": { 00:21:44.804 "name": "key0", 00:21:44.804 "path": "/tmp/tmp.UcYOHjkV3h" 00:21:44.804 } 00:21:44.804 } 00:21:44.804 ] 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "subsystem": "iobuf", 00:21:44.804 "config": [ 00:21:44.804 { 00:21:44.804 "method": "iobuf_set_options", 00:21:44.804 "params": { 00:21:44.804 "small_pool_count": 8192, 00:21:44.804 "large_pool_count": 1024, 00:21:44.804 "small_bufsize": 8192, 00:21:44.804 "large_bufsize": 135168 00:21:44.804 } 00:21:44.804 } 00:21:44.804 ] 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "subsystem": "sock", 00:21:44.804 "config": [ 00:21:44.804 { 00:21:44.804 "method": "sock_set_default_impl", 00:21:44.804 "params": { 00:21:44.804 "impl_name": "posix" 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "sock_impl_set_options", 00:21:44.804 "params": { 00:21:44.804 "impl_name": "ssl", 00:21:44.804 "recv_buf_size": 4096, 00:21:44.804 "send_buf_size": 4096, 00:21:44.804 "enable_recv_pipe": true, 00:21:44.804 "enable_quickack": false, 00:21:44.804 "enable_placement_id": 0, 00:21:44.804 "enable_zerocopy_send_server": true, 00:21:44.804 "enable_zerocopy_send_client": false, 00:21:44.804 "zerocopy_threshold": 0, 00:21:44.804 "tls_version": 0, 00:21:44.804 "enable_ktls": false 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "sock_impl_set_options", 00:21:44.804 "params": { 00:21:44.804 "impl_name": "posix", 00:21:44.804 "recv_buf_size": 2097152, 00:21:44.804 "send_buf_size": 2097152, 00:21:44.804 "enable_recv_pipe": true, 00:21:44.804 "enable_quickack": false, 00:21:44.804 "enable_placement_id": 0, 00:21:44.804 "enable_zerocopy_send_server": true, 00:21:44.804 "enable_zerocopy_send_client": false, 00:21:44.804 "zerocopy_threshold": 0, 00:21:44.804 "tls_version": 0, 00:21:44.804 "enable_ktls": false 00:21:44.804 } 00:21:44.804 } 00:21:44.804 ] 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "subsystem": "vmd", 00:21:44.804 "config": [] 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "subsystem": "accel", 00:21:44.804 "config": [ 00:21:44.804 { 00:21:44.804 "method": "accel_set_options", 00:21:44.804 "params": { 00:21:44.804 "small_cache_size": 128, 00:21:44.804 "large_cache_size": 16, 00:21:44.804 "task_count": 2048, 00:21:44.804 "sequence_count": 2048, 00:21:44.804 "buf_count": 2048 00:21:44.804 } 00:21:44.804 } 00:21:44.804 ] 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "subsystem": "bdev", 00:21:44.804 "config": [ 00:21:44.804 { 00:21:44.804 "method": "bdev_set_options", 00:21:44.804 "params": { 00:21:44.804 "bdev_io_pool_size": 65535, 00:21:44.804 "bdev_io_cache_size": 256, 00:21:44.804 "bdev_auto_examine": true, 00:21:44.804 "iobuf_small_cache_size": 128, 00:21:44.804 "iobuf_large_cache_size": 16 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "bdev_raid_set_options", 00:21:44.804 
"params": { 00:21:44.804 "process_window_size_kb": 1024, 00:21:44.804 "process_max_bandwidth_mb_sec": 0 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "bdev_iscsi_set_options", 00:21:44.804 "params": { 00:21:44.804 "timeout_sec": 30 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "bdev_nvme_set_options", 00:21:44.804 "params": { 00:21:44.804 "action_on_timeout": "none", 00:21:44.804 "timeout_us": 0, 00:21:44.804 "timeout_admin_us": 0, 00:21:44.804 "keep_alive_timeout_ms": 10000, 00:21:44.804 "arbitration_burst": 0, 00:21:44.804 "low_priority_weight": 0, 00:21:44.804 "medium_priority_weight": 0, 00:21:44.804 "high_priority_weight": 0, 00:21:44.804 "nvme_adminq_poll_period_us": 10000, 00:21:44.804 "nvme_ioq_poll_period_us": 0, 00:21:44.804 "io_queue_requests": 512, 00:21:44.804 "delay_cmd_submit": true, 00:21:44.804 "transport_retry_count": 4, 00:21:44.804 "bdev_retry_count": 3, 00:21:44.804 "transport_ack_timeout": 0, 00:21:44.804 "ctrlr_loss_timeout_sec": 0, 00:21:44.804 "reconnect_delay_sec": 0, 00:21:44.804 "fast_io_fail_timeout_sec": 0, 00:21:44.804 "disable_auto_failback": false, 00:21:44.804 "generate_uuids": false, 00:21:44.804 "transport_tos": 0, 00:21:44.804 "nvme_error_stat": false, 00:21:44.804 "rdma_srq_size": 0, 00:21:44.804 "io_path_stat": false, 00:21:44.804 "allow_accel_sequence": false, 00:21:44.804 "rdma_max_cq_size": 0, 00:21:44.804 "rdma_cm_event_timeout_ms": 0, 00:21:44.804 "dhchap_digests": [ 00:21:44.804 "sha256", 00:21:44.804 "sha384", 00:21:44.804 "sha512" 00:21:44.804 ], 00:21:44.804 "dhchap_dhgroups": [ 00:21:44.804 "null", 00:21:44.804 "ffdhe2048", 00:21:44.804 "ffdhe3072", 00:21:44.804 "ffdhe4096", 00:21:44.804 "ffdhe6144", 00:21:44.804 "ffdhe8192" 00:21:44.804 ] 00:21:44.804 } 00:21:44.804 }, 00:21:44.804 { 00:21:44.804 "method": "bdev_nvme_attach_controller", 00:21:44.804 "params": { 00:21:44.804 "name": "nvme0", 00:21:44.805 "trtype": "TCP", 00:21:44.805 "adrfam": "IPv4", 00:21:44.805 "traddr": "10.0.0.2", 00:21:44.805 "trsvcid": "4420", 00:21:44.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:44.805 "prchk_reftag": false, 00:21:44.805 "prchk_guard": false, 00:21:44.805 "ctrlr_loss_timeout_sec": 0, 00:21:44.805 "reconnect_delay_sec": 0, 00:21:44.805 "fast_io_fail_timeout_sec": 0, 00:21:44.805 "psk": "key0", 00:21:44.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:44.805 "hdgst": false, 00:21:44.805 "ddgst": false 00:21:44.805 } 00:21:44.805 }, 00:21:44.805 { 00:21:44.805 "method": "bdev_nvme_set_hotplug", 00:21:44.805 "params": { 00:21:44.805 "period_us": 100000, 00:21:44.805 "enable": false 00:21:44.805 } 00:21:44.805 }, 00:21:44.805 { 00:21:44.805 "method": "bdev_enable_histogram", 00:21:44.805 "params": { 00:21:44.805 "name": "nvme0n1", 00:21:44.805 "enable": true 00:21:44.805 } 00:21:44.805 }, 00:21:44.805 { 00:21:44.805 "method": "bdev_wait_for_examine" 00:21:44.805 } 00:21:44.805 ] 00:21:44.805 }, 00:21:44.805 { 00:21:44.805 "subsystem": "nbd", 00:21:44.805 "config": [] 00:21:44.805 } 00:21:44.805 ] 00:21:44.805 }' 00:21:44.805 [2024-07-25 10:10:23.887511] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:44.805 [2024-07-25 10:10:23.887565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1335134 ] 00:21:44.805 EAL: No free 2048 kB hugepages reported on node 1 00:21:45.065 [2024-07-25 10:10:23.963545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.065 [2024-07-25 10:10:24.016949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.065 [2024-07-25 10:10:24.150431] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.637 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.637 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:45.637 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:45.637 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:45.899 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.899 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:45.899 Running I/O for 1 seconds... 00:21:46.841 00:21:46.841 Latency(us) 00:21:46.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.841 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:46.841 Verification LBA range: start 0x0 length 0x2000 00:21:46.841 nvme0n1 : 1.08 1623.60 6.34 0.00 0.00 76324.50 4915.20 117090.99 00:21:46.841 =================================================================================================================== 00:21:46.842 Total : 1623.60 6.34 0.00 0.00 76324.50 4915.20 117090.99 00:21:46.842 0 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:47.102 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:47.102 nvmf_trace.0 00:21:47.103 10:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1335134 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1335134 ']' 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1335134 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1335134 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1335134' 00:21:47.103 killing process with pid 1335134 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1335134 00:21:47.103 Received shutdown signal, test time was about 1.000000 seconds 00:21:47.103 00:21:47.103 Latency(us) 00:21:47.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.103 =================================================================================================================== 00:21:47.103 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:47.103 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1335134 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.364 rmmod nvme_tcp 00:21:47.364 rmmod nvme_fabrics 00:21:47.364 rmmod nvme_keyring 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1334940 ']' 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1334940 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1334940 ']' 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1334940 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.364 10:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1334940 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1334940' 00:21:47.364 killing process with pid 1334940 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1334940 00:21:47.364 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1334940 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.625 10:10:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.TmR28SnMSM /tmp/tmp.fq29gR4FDo /tmp/tmp.UcYOHjkV3h 00:21:49.539 00:21:49.539 real 1m23.289s 00:21:49.539 user 2m4.688s 00:21:49.539 sys 0m29.826s 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.539 ************************************ 00:21:49.539 END TEST nvmf_tls 00:21:49.539 ************************************ 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:49.539 10:10:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:49.801 ************************************ 00:21:49.801 START TEST nvmf_fips 00:21:49.801 ************************************ 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:49.801 * Looking for test storage... 
00:21:49.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:49.801 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:49.802 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:50.064 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:50.065 Error setting digest 00:21:50.065 00F26732B97F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:50.065 00F26732B97F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.065 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:56.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.658 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:21:56.659 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:56.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:56.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.659 
10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:56.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.765 ms 00:21:56.659 00:21:56.659 --- 10.0.0.2 ping statistics --- 00:21:56.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.659 rtt min/avg/max/mdev = 0.765/0.765/0.765/0.000 ms 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:56.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:21:56.659 00:21:56.659 --- 10.0.0.1 ping statistics --- 00:21:56.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.659 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1339822 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1339822 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1339822 ']' 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:56.659 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:56.920 [2024-07-25 10:10:35.834543] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:56.920 [2024-07-25 10:10:35.834598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.920 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.920 [2024-07-25 10:10:35.920213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.920 [2024-07-25 10:10:36.013769] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.920 [2024-07-25 10:10:36.013830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.920 [2024-07-25 10:10:36.013839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.920 [2024-07-25 10:10:36.013846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.920 [2024-07-25 10:10:36.013852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:56.920 [2024-07-25 10:10:36.013884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.491 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.491 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:57.491 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.491 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.491 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:57.752 [2024-07-25 10:10:36.782463] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.752 [2024-07-25 10:10:36.798472] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.752 [2024-07-25 10:10:36.798742] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.752 
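The fips.sh trace above reduces to provisioning a TLS pre-shared key on disk and handing it to the target's RPC configuration step; the TCP transport init and the TLS listener on 10.0.0.2 port 4420 reported above are the result. A condensed sketch of just the key-provisioning commands visible in the trace, with the long Jenkins workspace path shortened to $SPDK purely for readability (that shorthand is this sketch's assumption, not a variable the test defines):

# Write the interchange-format PSK shared by target and initiator, then restrict its permissions
echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > $SPDK/test/nvmf/fips/key.txt
chmod 0600 $SPDK/test/nvmf/fips/key.txt
# setup_nvmf_tgt_conf (fips.sh@141 above) then passes this key file to the running target over rpc.py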
[2024-07-25 10:10:36.828829] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:57.752 malloc0 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1339906 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1339906 /var/tmp/bdevperf.sock 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1339906 ']' 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:57.752 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:58.015 [2024-07-25 10:10:36.931833] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:58.015 [2024-07-25 10:10:36.931910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1339906 ] 00:21:58.015 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.015 [2024-07-25 10:10:36.987069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.015 [2024-07-25 10:10:37.051458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.587 10:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:58.587 10:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:58.587 10:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:58.848 [2024-07-25 10:10:37.835020] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.848 [2024-07-25 10:10:37.835084] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:58.848 TLSTESTn1 00:21:58.848 10:10:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.109 Running I/O for 10 seconds... 
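The initiator half of the FIPS/TLS check traced above is an RPC-driven bdevperf run; the lines below are a minimal restatement of the three commands shown in the log (again abbreviating the workspace path to $SPDK for readability), and the latency table that follows is the output of the final perform_tests call.

# Launch bdevperf idle (-z) on its own RPC socket, pinned to core 2 (mask 0x4), queue depth 128, 4096-byte verify I/O for 10 s;
# the test script backgrounds this and waits for /var/tmp/bdevperf.sock to come up
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach a controller to the TLS listener, presenting the PSK written earlier to key.txt
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk $SPDK/test/nvmf/fips/key.txt

# Start the queued workload; bdevperf prints the IOPS/latency summary when the 10 s run completes
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests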
00:22:09.147 00:22:09.147 Latency(us) 00:22:09.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.147 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:09.147 Verification LBA range: start 0x0 length 0x2000 00:22:09.147 TLSTESTn1 : 10.08 2115.24 8.26 0.00 0.00 60299.76 5051.73 137188.69 00:22:09.147 =================================================================================================================== 00:22:09.147 Total : 2115.24 8.26 0.00 0.00 60299.76 5051.73 137188.69 00:22:09.147 0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:09.147 nvmf_trace.0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1339906 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1339906 ']' 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1339906 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.147 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339906 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339906' 00:22:09.409 killing process with pid 1339906 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1339906 00:22:09.409 Received shutdown signal, test time was about 10.000000 seconds 00:22:09.409 00:22:09.409 Latency(us) 00:22:09.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:09.409 =================================================================================================================== 00:22:09.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:09.409 
[2024-07-25 10:10:48.311270] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1339906 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.409 rmmod nvme_tcp 00:22:09.409 rmmod nvme_fabrics 00:22:09.409 rmmod nvme_keyring 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1339822 ']' 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1339822 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1339822 ']' 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1339822 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:09.409 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1339822 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1339822' 00:22:09.671 killing process with pid 1339822 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1339822 00:22:09.671 [2024-07-25 10:10:48.549632] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1339822 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.671 10:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:09.671 10:10:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:12.222 00:22:12.222 real 0m22.076s 00:22:12.222 user 0m22.491s 00:22:12.222 sys 0m10.173s 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:12.222 ************************************ 00:22:12.222 END TEST nvmf_fips 00:22:12.222 ************************************ 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:12.222 10:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:18.833 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.833 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:18.833 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:18.833 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.834 
10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:18.834 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:18.834 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:18.834 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:18.834 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:18.834 ************************************ 00:22:18.834 START TEST nvmf_perf_adq 00:22:18.834 ************************************ 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:18.834 * Looking for test storage... 
00:22:18.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.834 10:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.834 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:18.835 10:10:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.513 10:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:25.513 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:25.513 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:25.513 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:25.513 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:25.513 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:26.901 10:11:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:28.818 10:11:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.148 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.149 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.149 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.149 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.149 10:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.149 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.150 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.150 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:22:34.150 00:22:34.150 --- 10.0.0.2 ping statistics --- 00:22:34.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.150 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:34.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:22:34.150 00:22:34.150 --- 10.0.0.1 ping statistics --- 00:22:34.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.150 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.150 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1351759 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1351759 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1351759 ']' 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:34.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:34.411 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:34.411 [2024-07-25 10:11:13.375806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:34.411 [2024-07-25 10:11:13.375873] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.411 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.411 [2024-07-25 10:11:13.446771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.411 [2024-07-25 10:11:13.522179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.411 [2024-07-25 10:11:13.522226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.411 [2024-07-25 10:11:13.522234] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.411 [2024-07-25 10:11:13.522240] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.411 [2024-07-25 10:11:13.522246] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.411 [2024-07-25 10:11:13.522336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.411 [2024-07-25 10:11:13.522469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.411 [2024-07-25 10:11:13.522901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.411 [2024-07-25 10:11:13.522900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
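The adq_configure_nvmf_target steps that the trace continues with below boil down to one rpc.py sequence against the target started above; this condensed sketch only restates the calls visible in the remainder of the trace ($SPDK again abbreviates the workspace path, and rpc.py here stands in for the test's rpc_cmd wrapper):

# Socket-layer options for the posix implementation reported above, then finish framework init
$SPDK/scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
$SPDK/scripts/rpc.py framework_start_init

# TCP transport with the trace's transport opts, one malloc bdev, and a subsystem exposing it on 10.0.0.2:4420
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
$SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420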
00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 [2024-07-25 10:11:14.328531] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 Malloc1 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.354 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:35.355 [2024-07-25 10:11:14.387855] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1352110 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:35.355 10:11:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:35.355 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.271 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:37.271 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.271 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:37.530 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.530 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:37.530 "tick_rate": 2400000000, 00:22:37.530 "poll_groups": [ 00:22:37.530 { 00:22:37.530 "name": "nvmf_tgt_poll_group_000", 00:22:37.530 "admin_qpairs": 1, 00:22:37.530 "io_qpairs": 1, 00:22:37.530 "current_admin_qpairs": 1, 00:22:37.530 "current_io_qpairs": 1, 00:22:37.530 "pending_bdev_io": 0, 00:22:37.530 "completed_nvme_io": 21399, 00:22:37.530 "transports": [ 00:22:37.530 { 00:22:37.530 "trtype": "TCP" 00:22:37.530 } 00:22:37.530 ] 00:22:37.530 }, 00:22:37.530 { 00:22:37.530 "name": "nvmf_tgt_poll_group_001", 00:22:37.530 "admin_qpairs": 0, 00:22:37.530 "io_qpairs": 1, 00:22:37.530 "current_admin_qpairs": 0, 00:22:37.530 "current_io_qpairs": 1, 00:22:37.530 "pending_bdev_io": 0, 00:22:37.530 "completed_nvme_io": 27871, 00:22:37.530 "transports": [ 00:22:37.530 { 00:22:37.530 "trtype": "TCP" 00:22:37.530 } 00:22:37.530 ] 00:22:37.530 }, 00:22:37.530 { 00:22:37.530 "name": "nvmf_tgt_poll_group_002", 00:22:37.530 "admin_qpairs": 0, 00:22:37.530 "io_qpairs": 1, 00:22:37.530 "current_admin_qpairs": 0, 00:22:37.531 "current_io_qpairs": 1, 00:22:37.531 "pending_bdev_io": 0, 00:22:37.531 "completed_nvme_io": 20111, 00:22:37.531 "transports": [ 00:22:37.531 { 00:22:37.531 "trtype": "TCP" 00:22:37.531 } 00:22:37.531 ] 00:22:37.531 }, 00:22:37.531 { 00:22:37.531 "name": "nvmf_tgt_poll_group_003", 00:22:37.531 "admin_qpairs": 0, 00:22:37.531 "io_qpairs": 1, 00:22:37.531 "current_admin_qpairs": 0, 00:22:37.531 "current_io_qpairs": 1, 00:22:37.531 "pending_bdev_io": 0, 00:22:37.531 "completed_nvme_io": 19203, 00:22:37.531 "transports": [ 00:22:37.531 { 00:22:37.531 "trtype": "TCP" 00:22:37.531 } 00:22:37.531 ] 00:22:37.531 } 00:22:37.531 ] 00:22:37.531 }' 00:22:37.531 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:37.531 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:37.531 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:37.531 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:37.531 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 1352110 00:22:45.724 Initializing NVMe Controllers 00:22:45.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:45.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:45.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:45.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:45.724 Initialization complete. Launching workers. 00:22:45.724 ======================================================== 00:22:45.724 Latency(us) 00:22:45.724 Device Information : IOPS MiB/s Average min max 00:22:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10710.50 41.84 5975.42 1329.26 9923.31 00:22:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14607.80 57.06 4381.14 969.80 11692.25 00:22:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13769.00 53.79 4647.99 1370.71 12106.73 00:22:45.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15063.60 58.84 4248.41 1382.34 11133.25 00:22:45.724 ======================================================== 00:22:45.724 Total : 54150.89 211.53 4727.40 969.80 12106.73 00:22:45.724 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.724 rmmod nvme_tcp 00:22:45.724 rmmod nvme_fabrics 00:22:45.724 rmmod nvme_keyring 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1351759 ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1351759 ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.724 10:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1351759' 00:22:45.724 killing process with pid 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1351759 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.724 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.271 10:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:48.271 10:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:48.271 10:11:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:49.658 10:11:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:51.579 10:11:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:56.873 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:56.873 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:56.873 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:56.873 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.873 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:22:56.874 00:22:56.874 --- 10.0.0.2 ping statistics --- 00:22:56.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.874 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:22:56.874 00:22:56.874 --- 10.0.0.1 ping statistics --- 00:22:56.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.874 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:56.874 net.core.busy_poll = 1 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:56.874 net.core.busy_read = 1 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:56.874 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:57.136 
10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:57.136 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1356577 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1356577 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1356577 ']' 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.137 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.137 [2024-07-25 10:11:36.181125] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.137 [2024-07-25 10:11:36.181193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.137 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.137 [2024-07-25 10:11:36.253091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:57.398 [2024-07-25 10:11:36.329651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.398 [2024-07-25 10:11:36.329691] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.398 [2024-07-25 10:11:36.329699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.398 [2024-07-25 10:11:36.329705] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.398 [2024-07-25 10:11:36.329711] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
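The NIC-side ADQ setup traced just above (perf_adq.sh@22-@38) is easier to read in one place. A consolidated sketch of the same commands, with only the namespace wrapper dropped (in the trace every command runs under ip netns exec cvl_0_0_ns_spdk, and root privileges are assumed):

    IFACE=cvl_0_0   # E810 port driven by the ice driver reloaded earlier in the trace
    ethtool --offload $IFACE hw-tc-offload on                         # hardware TC offload
    ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1    # enable socket busy polling instead of interrupt-driven waits
    sysctl -w net.core.busy_read=1
    # Two traffic classes: TC0 gets 2 queues starting at queue 0, TC1 gets 2 starting at
    # queue 2, offloaded to the NIC in channel mode; priority 1 maps to TC1.
    tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev $IFACE ingress
    # Steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 entirely in hardware (skip_sw).
    tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs SPDK's scripts/perf/nvmf/set_xps_rxqs helper on the same interface before starting the second nvmf_tgt, whose transport is created with --sock-priority 1 so that its connections map to the dedicated traffic class per the mqprio map above.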
00:22:57.398 [2024-07-25 10:11:36.329846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:57.398 [2024-07-25 10:11:36.329979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:57.398 [2024-07-25 10:11:36.330136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.398 [2024-07-25 10:11:36.330137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:57.971 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:57.971 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:57.971 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:57.971 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:57.971 10:11:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.971 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:57.972 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 [2024-07-25 10:11:37.138544] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 Malloc1 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.269 [2024-07-25 10:11:37.197904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1356921 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:22:58.269 10:11:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:58.269 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:00.201 "tick_rate": 2400000000, 00:23:00.201 "poll_groups": [ 00:23:00.201 { 00:23:00.201 "name": "nvmf_tgt_poll_group_000", 00:23:00.201 "admin_qpairs": 1, 00:23:00.201 "io_qpairs": 4, 00:23:00.201 "current_admin_qpairs": 1, 00:23:00.201 
"current_io_qpairs": 4, 00:23:00.201 "pending_bdev_io": 0, 00:23:00.201 "completed_nvme_io": 36287, 00:23:00.201 "transports": [ 00:23:00.201 { 00:23:00.201 "trtype": "TCP" 00:23:00.201 } 00:23:00.201 ] 00:23:00.201 }, 00:23:00.201 { 00:23:00.201 "name": "nvmf_tgt_poll_group_001", 00:23:00.201 "admin_qpairs": 0, 00:23:00.201 "io_qpairs": 0, 00:23:00.201 "current_admin_qpairs": 0, 00:23:00.201 "current_io_qpairs": 0, 00:23:00.201 "pending_bdev_io": 0, 00:23:00.201 "completed_nvme_io": 0, 00:23:00.201 "transports": [ 00:23:00.201 { 00:23:00.201 "trtype": "TCP" 00:23:00.201 } 00:23:00.201 ] 00:23:00.201 }, 00:23:00.201 { 00:23:00.201 "name": "nvmf_tgt_poll_group_002", 00:23:00.201 "admin_qpairs": 0, 00:23:00.201 "io_qpairs": 0, 00:23:00.201 "current_admin_qpairs": 0, 00:23:00.201 "current_io_qpairs": 0, 00:23:00.201 "pending_bdev_io": 0, 00:23:00.201 "completed_nvme_io": 0, 00:23:00.201 "transports": [ 00:23:00.201 { 00:23:00.201 "trtype": "TCP" 00:23:00.201 } 00:23:00.201 ] 00:23:00.201 }, 00:23:00.201 { 00:23:00.201 "name": "nvmf_tgt_poll_group_003", 00:23:00.201 "admin_qpairs": 0, 00:23:00.201 "io_qpairs": 0, 00:23:00.201 "current_admin_qpairs": 0, 00:23:00.201 "current_io_qpairs": 0, 00:23:00.201 "pending_bdev_io": 0, 00:23:00.201 "completed_nvme_io": 0, 00:23:00.201 "transports": [ 00:23:00.201 { 00:23:00.201 "trtype": "TCP" 00:23:00.201 } 00:23:00.201 ] 00:23:00.201 } 00:23:00.201 ] 00:23:00.201 }' 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:23:00.201 10:11:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1356921 00:23:08.341 Initializing NVMe Controllers 00:23:08.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:08.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:08.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:08.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:08.341 Initialization complete. Launching workers. 
00:23:08.341 ======================================================== 00:23:08.341 Latency(us) 00:23:08.341 Device Information : IOPS MiB/s Average min max 00:23:08.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6047.40 23.62 10609.99 1371.76 57523.24 00:23:08.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6302.10 24.62 10155.11 1326.05 59858.33 00:23:08.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6144.40 24.00 10448.66 1503.52 57207.81 00:23:08.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6012.40 23.49 10647.28 1372.00 59043.56 00:23:08.341 ======================================================== 00:23:08.341 Total : 24506.30 95.73 10461.71 1326.05 59858.33 00:23:08.341 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.341 rmmod nvme_tcp 00:23:08.341 rmmod nvme_fabrics 00:23:08.341 rmmod nvme_keyring 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1356577 ']' 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1356577 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1356577 ']' 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1356577 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:08.341 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1356577 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1356577' 00:23:08.603 killing process with pid 1356577 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1356577 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1356577 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:08.603 
10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.603 10:11:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:11.151 00:23:11.151 real 0m52.165s 00:23:11.151 user 2m49.675s 00:23:11.151 sys 0m10.379s 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.151 ************************************ 00:23:11.151 END TEST nvmf_perf_adq 00:23:11.151 ************************************ 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.151 ************************************ 00:23:11.151 START TEST nvmf_shutdown 00:23:11.151 ************************************ 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:11.151 * Looking for test storage... 
00:23:11.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.151 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.152 10:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:11.152 10:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:11.152 ************************************ 00:23:11.152 START TEST nvmf_shutdown_tc1 00:23:11.152 ************************************ 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:11.152 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.740 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:17.741 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:17.741 10:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:17.741 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:17.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:17.741 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.741 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.002 10:11:56 
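The gather_supported_nvmf_pci_devs trace above matches the two Intel E810 functions (0000:4b:00.0 and 0000:4b:00.1, device 0x159b) against its PCI ID tables and then resolves each one to a kernel net device through sysfs, which is where the cvl_0_0 / cvl_0_1 names come from. A minimal stand-alone sketch of that sysfs lookup (the PCI addresses are taken from this run; everything else is illustrative and not part of the test scripts):

# Sketch only: resolve the kernel net device behind each detected NVMe-oF capable NIC.
# PCI addresses come from the log above; adjust them for other hardware.
for pci in 0000:4b:00.0 0000:4b:00.1; do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] || continue                  # skip functions with no netdev
        name=${netdev##*/}
        state=$(cat "$netdev/operstate" 2>/dev/null)  # the trace keeps only devices that are "up"
        echo "Found net device under $pci: $name (operstate: ${state:-unknown})"
    done
done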
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.002 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.002 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:18.002 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.002 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.002 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.002 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:18.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:23:18.002 00:23:18.002 --- 10.0.0.2 ping statistics --- 00:23:18.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.002 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:23:18.002 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:18.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:23:18.265 00:23:18.265 --- 10.0.0.1 ping statistics --- 00:23:18.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.265 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1363070 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1363070 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1363070 ']' 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:18.265 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:18.265 [2024-07-25 10:11:57.251935] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:18.265 [2024-07-25 10:11:57.252002] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.265 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.265 [2024-07-25 10:11:57.341642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.527 [2024-07-25 10:11:57.435969] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.527 [2024-07-25 10:11:57.436028] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.527 [2024-07-25 10:11:57.436037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.527 [2024-07-25 10:11:57.436044] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.527 [2024-07-25 10:11:57.436050] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
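To make the nvmftestinit / nvmfappstart sequence traced above easier to follow: one port of the NIC (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and nvmf_tgt is launched inside that namespace with core mask 0x1E. Condensed into plain commands, this is a recap of what the log already shows, not a replacement for nvmftestinit (run as root):

NS=cvl_0_0_ns_spdk    # namespace name used by this run

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target-side port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

# Start the SPDK NVMe-oF target inside the namespace (binary path from this workspace).
ip netns exec "$NS" \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

The two ping checks in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside the namespace) simply confirm that this split topology carries traffic before any NVMe-oF work starts.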
00:23:18.527 [2024-07-25 10:11:57.436209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.527 [2024-07-25 10:11:57.436377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.527 [2024-07-25 10:11:57.436607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.527 [2024-07-25 10:11:57.436609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.099 [2024-07-25 10:11:58.082063] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.099 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.099 Malloc1 00:23:19.099 [2024-07-25 10:11:58.185511] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.099 Malloc2 00:23:19.359 Malloc3 00:23:19.359 Malloc4 00:23:19.359 Malloc5 00:23:19.359 Malloc6 00:23:19.359 Malloc7 00:23:19.359 Malloc8 00:23:19.359 Malloc9 00:23:19.621 Malloc10 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1363427 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1363427 /var/tmp/bdevperf.sock 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1363427 ']' 00:23:19.621 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.621 10:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": 
"Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 [2024-07-25 10:11:58.630558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:19.622 [2024-07-25 10:11:58.630612] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.622 } 00:23:19.622 EOF 00:23:19.622 )") 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.622 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.622 { 00:23:19.622 "params": { 00:23:19.622 "name": "Nvme$subsystem", 00:23:19.622 "trtype": "$TEST_TRANSPORT", 00:23:19.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.622 "adrfam": "ipv4", 00:23:19.622 "trsvcid": "$NVMF_PORT", 00:23:19.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.622 "hdgst": ${hdgst:-false}, 00:23:19.622 "ddgst": ${ddgst:-false} 00:23:19.622 }, 00:23:19.622 "method": "bdev_nvme_attach_controller" 00:23:19.623 } 00:23:19.623 EOF 00:23:19.623 )") 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:19.623 { 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme$subsystem", 00:23:19.623 "trtype": "$TEST_TRANSPORT", 00:23:19.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.623 "adrfam": "ipv4", 
00:23:19.623 "trsvcid": "$NVMF_PORT", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.623 "hdgst": ${hdgst:-false}, 00:23:19.623 "ddgst": ${ddgst:-false} 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 } 00:23:19.623 EOF 00:23:19.623 )") 00:23:19.623 EAL: No free 2048 kB hugepages reported on node 1 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:19.623 10:11:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme1", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme2", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme3", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme4", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme5", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme6", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme7", 00:23:19.623 "trtype": 
"tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme8", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme9", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 },{ 00:23:19.623 "params": { 00:23:19.623 "name": "Nvme10", 00:23:19.623 "trtype": "tcp", 00:23:19.623 "traddr": "10.0.0.2", 00:23:19.623 "adrfam": "ipv4", 00:23:19.623 "trsvcid": "4420", 00:23:19.623 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:19.623 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:19.623 "hdgst": false, 00:23:19.623 "ddgst": false 00:23:19.623 }, 00:23:19.623 "method": "bdev_nvme_attach_controller" 00:23:19.623 }' 00:23:19.623 [2024-07-25 10:11:58.691077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.883 [2024-07-25 10:11:58.755873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1363427 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:21.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1363427 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:21.267 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1363070 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.208 { 00:23:22.208 "params": { 00:23:22.208 "name": "Nvme$subsystem", 00:23:22.208 "trtype": "$TEST_TRANSPORT", 00:23:22.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.208 "adrfam": "ipv4", 00:23:22.208 "trsvcid": "$NVMF_PORT", 00:23:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.208 "hdgst": ${hdgst:-false}, 00:23:22.208 "ddgst": ${ddgst:-false} 00:23:22.208 }, 00:23:22.208 "method": "bdev_nvme_attach_controller" 00:23:22.208 } 00:23:22.208 EOF 00:23:22.208 )") 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.208 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.208 { 00:23:22.208 "params": { 00:23:22.208 "name": "Nvme$subsystem", 00:23:22.208 "trtype": "$TEST_TRANSPORT", 00:23:22.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.208 "adrfam": "ipv4", 00:23:22.208 "trsvcid": "$NVMF_PORT", 00:23:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.208 "hdgst": ${hdgst:-false}, 00:23:22.208 "ddgst": ${ddgst:-false} 00:23:22.208 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 [2024-07-25 10:12:01.294933] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:22.209 [2024-07-25 10:12:01.294985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364150 ] 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.209 { 00:23:22.209 "params": { 00:23:22.209 "name": "Nvme$subsystem", 00:23:22.209 "trtype": "$TEST_TRANSPORT", 00:23:22.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.209 "adrfam": "ipv4", 00:23:22.209 "trsvcid": "$NVMF_PORT", 00:23:22.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.209 "hdgst": ${hdgst:-false}, 00:23:22.209 "ddgst": ${ddgst:-false} 00:23:22.209 }, 00:23:22.209 "method": "bdev_nvme_attach_controller" 00:23:22.209 } 00:23:22.209 EOF 00:23:22.209 )") 00:23:22.209 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.210 { 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme$subsystem", 00:23:22.210 "trtype": "$TEST_TRANSPORT", 00:23:22.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "$NVMF_PORT", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.210 "hdgst": ${hdgst:-false}, 00:23:22.210 "ddgst": ${ddgst:-false} 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 } 00:23:22.210 EOF 00:23:22.210 )") 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:22.210 { 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme$subsystem", 00:23:22.210 "trtype": "$TEST_TRANSPORT", 00:23:22.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:22.210 
"adrfam": "ipv4", 00:23:22.210 "trsvcid": "$NVMF_PORT", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:22.210 "hdgst": ${hdgst:-false}, 00:23:22.210 "ddgst": ${ddgst:-false} 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 } 00:23:22.210 EOF 00:23:22.210 )") 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:22.210 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:22.210 10:12:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme1", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme2", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme3", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme4", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme5", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme6", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme7", 
00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme8", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme9", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 },{ 00:23:22.210 "params": { 00:23:22.210 "name": "Nvme10", 00:23:22.210 "trtype": "tcp", 00:23:22.210 "traddr": "10.0.0.2", 00:23:22.210 "adrfam": "ipv4", 00:23:22.210 "trsvcid": "4420", 00:23:22.210 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:22.210 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:22.210 "hdgst": false, 00:23:22.210 "ddgst": false 00:23:22.210 }, 00:23:22.210 "method": "bdev_nvme_attach_controller" 00:23:22.210 }' 00:23:22.471 [2024-07-25 10:12:01.354797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.471 [2024-07-25 10:12:01.418956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:23.858 Running I/O for 1 seconds... 
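The long printf block above is the comma-joined list of bdev_nvme_attach_controller entries that gen_nvmf_target_json resolved for cnode1 through cnode10; bdevperf reads it through --json /dev/fd/62. Written out as a file, the configuration has roughly the following shape (a sketch trimmed to the first two controllers, assuming the standard SPDK --json layout of a "bdev" subsystem with a config array; the /tmp path is only for illustration):

cat <<'JSON' > /tmp/bdevperf_nvmf.json
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        },
        {
          "params": {
            "name": "Nvme2",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode2",
            "hostnqn": "nqn.2016-06.io.spdk:host2",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

With a file like that, the run above is equivalent to bdevperf --json /tmp/bdevperf_nvmf.json -q 64 -o 65536 -w verify -t 1: 64 outstanding 64 KiB verify I/Os per attached controller for one second, which is what the results table below reports per NvmeXn1 bdev.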
00:23:24.837 00:23:24.837 Latency(us) 00:23:24.837 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.837 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme1n1 : 1.19 215.40 13.46 0.00 0.00 294033.71 24357.55 304087.04 00:23:24.837 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme2n1 : 1.15 166.76 10.42 0.00 0.00 373011.34 22719.15 321563.31 00:23:24.837 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme3n1 : 1.17 163.70 10.23 0.00 0.00 374258.92 26869.76 332049.07 00:23:24.837 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme4n1 : 1.19 268.77 16.80 0.00 0.00 224171.35 24139.09 263891.63 00:23:24.837 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme5n1 : 1.20 267.17 16.70 0.00 0.00 221664.77 23811.41 249910.61 00:23:24.837 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme6n1 : 1.19 323.37 20.21 0.00 0.00 179595.24 19660.80 239424.85 00:23:24.837 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme7n1 : 1.21 211.37 13.21 0.00 0.00 270854.40 19551.57 286610.77 00:23:24.837 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme8n1 : 1.21 264.75 16.55 0.00 0.00 212308.48 15073.28 241172.48 00:23:24.837 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme9n1 : 1.17 164.34 10.27 0.00 0.00 334014.01 44346.03 337291.95 00:23:24.837 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:24.837 Verification LBA range: start 0x0 length 0x400 00:23:24.837 Nvme10n1 : 1.18 272.00 17.00 0.00 0.00 198280.87 24029.87 219327.15 00:23:24.837 =================================================================================================================== 00:23:24.837 Total : 2317.63 144.85 0.00 0.00 252630.33 15073.28 337291.95 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:25.098 10:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:25.098 rmmod nvme_tcp 00:23:25.098 rmmod nvme_fabrics 00:23:25.098 rmmod nvme_keyring 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1363070 ']' 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1363070 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1363070 ']' 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1363070 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1363070 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1363070' 00:23:25.098 killing process with pid 1363070 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1363070 00:23:25.098 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1363070 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:25.358 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:27.914 00:23:27.914 real 0m16.542s 00:23:27.914 user 0m33.764s 00:23:27.914 sys 0m6.631s 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.914 ************************************ 00:23:27.914 END TEST nvmf_shutdown_tc1 00:23:27.914 ************************************ 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:27.914 ************************************ 00:23:27.914 START TEST nvmf_shutdown_tc2 00:23:27.914 ************************************ 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.914 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.915 10:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.915 10:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:27.915 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:27.915 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.915 10:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:27.915 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:27.915 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.915 10:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.915 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:23:27.916 00:23:27.916 --- 10.0.0.2 ping statistics --- 00:23:27.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.916 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:23:27.916 00:23:27.916 --- 10.0.0.1 ping statistics --- 00:23:27.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.916 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1365342 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1365342 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1365342 ']' 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
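Summarising the network bring-up traced above: the two ice ports are exposed as cvl_0_0/cvl_0_1, the target-side port is moved into a private network namespace with 10.0.0.2, the initiator-side port keeps 10.0.0.1 in the host namespace, port 4420 is opened, connectivity is confirmed with a ping in each direction, and nvmf_tgt is then launched inside the namespace on core mask 0x1E. A condensed replay of those commands, for reference (requires root; interface names, addresses and the shortened binary path are taken from this run):

# Condensed replay of the target-side setup steps traced above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator-side address (host namespace)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # host -> namespaced target
ip netns exec "$NS" ping -c 1 10.0.0.1           # namespace -> host
# Start the target inside the namespace, as nvmfappstart -m 0x1E does above
# (full workspace path shortened here):
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &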
00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:27.916 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:27.916 [2024-07-25 10:12:07.028809] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:27.916 [2024-07-25 10:12:07.028900] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.177 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.177 [2024-07-25 10:12:07.117155] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.177 [2024-07-25 10:12:07.178325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.177 [2024-07-25 10:12:07.178360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.177 [2024-07-25 10:12:07.178365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.177 [2024-07-25 10:12:07.178370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.177 [2024-07-25 10:12:07.178374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.177 [2024-07-25 10:12:07.178483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.177 [2024-07-25 10:12:07.178628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.177 [2024-07-25 10:12:07.178782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.177 [2024-07-25 10:12:07.178784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.747 [2024-07-25 10:12:07.856681] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:28.747 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.006 10:12:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.006 Malloc1 00:23:29.006 [2024-07-25 10:12:07.955301] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.006 Malloc2 00:23:29.006 Malloc3 00:23:29.006 Malloc4 00:23:29.006 Malloc5 00:23:29.006 Malloc6 00:23:29.266 Malloc7 00:23:29.266 Malloc8 00:23:29.266 Malloc9 00:23:29.266 Malloc10 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1365722 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1365722 /var/tmp/bdevperf.sock 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1365722 ']' 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
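The per-subsystem block appended to rpcs.txt by the cat calls above is defined earlier in shutdown.sh and is not expanded in this excerpt, so the exact RPC arguments are not visible here. A rough per-subsystem equivalent, issued one call at a time with scripts/rpc.py, is sketched below; the malloc size, block size and serial numbers are assumptions, while the transport options and the 10.0.0.2:4420 listener match this run.

# Rough equivalent of the batched rpcs.txt built above (values partly assumed).
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # already done once, above
for i in {1..10}; do
  ./scripts/rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
    -t tcp -a 10.0.0.2 -s 4420
done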
00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.266 { 00:23:29.266 "params": { 00:23:29.266 "name": "Nvme$subsystem", 00:23:29.266 "trtype": "$TEST_TRANSPORT", 00:23:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.266 "adrfam": "ipv4", 00:23:29.266 "trsvcid": "$NVMF_PORT", 00:23:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.266 "hdgst": ${hdgst:-false}, 00:23:29.266 "ddgst": ${ddgst:-false} 00:23:29.266 }, 00:23:29.266 "method": "bdev_nvme_attach_controller" 00:23:29.266 } 00:23:29.266 EOF 00:23:29.266 )") 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.266 { 00:23:29.266 "params": { 00:23:29.266 "name": "Nvme$subsystem", 00:23:29.266 "trtype": "$TEST_TRANSPORT", 00:23:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.266 "adrfam": "ipv4", 00:23:29.266 "trsvcid": "$NVMF_PORT", 00:23:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.266 "hdgst": ${hdgst:-false}, 00:23:29.266 "ddgst": ${ddgst:-false} 00:23:29.266 }, 00:23:29.266 "method": "bdev_nvme_attach_controller" 00:23:29.266 } 00:23:29.266 EOF 00:23:29.266 )") 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.266 { 00:23:29.266 "params": { 00:23:29.266 "name": "Nvme$subsystem", 00:23:29.266 "trtype": "$TEST_TRANSPORT", 00:23:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.266 "adrfam": "ipv4", 00:23:29.266 "trsvcid": "$NVMF_PORT", 00:23:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.266 "hdgst": ${hdgst:-false}, 00:23:29.266 "ddgst": ${ddgst:-false} 00:23:29.266 }, 00:23:29.266 "method": 
"bdev_nvme_attach_controller" 00:23:29.266 } 00:23:29.266 EOF 00:23:29.266 )") 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.266 { 00:23:29.266 "params": { 00:23:29.266 "name": "Nvme$subsystem", 00:23:29.266 "trtype": "$TEST_TRANSPORT", 00:23:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.266 "adrfam": "ipv4", 00:23:29.266 "trsvcid": "$NVMF_PORT", 00:23:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.266 "hdgst": ${hdgst:-false}, 00:23:29.266 "ddgst": ${ddgst:-false} 00:23:29.266 }, 00:23:29.266 "method": "bdev_nvme_attach_controller" 00:23:29.266 } 00:23:29.266 EOF 00:23:29.266 )") 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.266 { 00:23:29.266 "params": { 00:23:29.266 "name": "Nvme$subsystem", 00:23:29.266 "trtype": "$TEST_TRANSPORT", 00:23:29.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.266 "adrfam": "ipv4", 00:23:29.266 "trsvcid": "$NVMF_PORT", 00:23:29.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.266 "hdgst": ${hdgst:-false}, 00:23:29.266 "ddgst": ${ddgst:-false} 00:23:29.266 }, 00:23:29.266 "method": "bdev_nvme_attach_controller" 00:23:29.266 } 00:23:29.266 EOF 00:23:29.266 )") 00:23:29.266 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.526 { 00:23:29.526 "params": { 00:23:29.526 "name": "Nvme$subsystem", 00:23:29.526 "trtype": "$TEST_TRANSPORT", 00:23:29.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.526 "adrfam": "ipv4", 00:23:29.526 "trsvcid": "$NVMF_PORT", 00:23:29.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.526 "hdgst": ${hdgst:-false}, 00:23:29.526 "ddgst": ${ddgst:-false} 00:23:29.526 }, 00:23:29.526 "method": "bdev_nvme_attach_controller" 00:23:29.526 } 00:23:29.526 EOF 00:23:29.526 )") 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.526 [2024-07-25 10:12:08.406370] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:29.526 [2024-07-25 10:12:08.406422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1365722 ] 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.526 { 00:23:29.526 "params": { 00:23:29.526 "name": "Nvme$subsystem", 00:23:29.526 "trtype": "$TEST_TRANSPORT", 00:23:29.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.526 "adrfam": "ipv4", 00:23:29.526 "trsvcid": "$NVMF_PORT", 00:23:29.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.526 "hdgst": ${hdgst:-false}, 00:23:29.526 "ddgst": ${ddgst:-false} 00:23:29.526 }, 00:23:29.526 "method": "bdev_nvme_attach_controller" 00:23:29.526 } 00:23:29.526 EOF 00:23:29.526 )") 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.526 { 00:23:29.526 "params": { 00:23:29.526 "name": "Nvme$subsystem", 00:23:29.526 "trtype": "$TEST_TRANSPORT", 00:23:29.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.526 "adrfam": "ipv4", 00:23:29.526 "trsvcid": "$NVMF_PORT", 00:23:29.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.526 "hdgst": ${hdgst:-false}, 00:23:29.526 "ddgst": ${ddgst:-false} 00:23:29.526 }, 00:23:29.526 "method": "bdev_nvme_attach_controller" 00:23:29.526 } 00:23:29.526 EOF 00:23:29.526 )") 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.526 { 00:23:29.526 "params": { 00:23:29.526 "name": "Nvme$subsystem", 00:23:29.526 "trtype": "$TEST_TRANSPORT", 00:23:29.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.526 "adrfam": "ipv4", 00:23:29.526 "trsvcid": "$NVMF_PORT", 00:23:29.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.526 "hdgst": ${hdgst:-false}, 00:23:29.526 "ddgst": ${ddgst:-false} 00:23:29.526 }, 00:23:29.526 "method": "bdev_nvme_attach_controller" 00:23:29.526 } 00:23:29.526 EOF 00:23:29.526 )") 00:23:29.526 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:29.527 { 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme$subsystem", 00:23:29.527 "trtype": "$TEST_TRANSPORT", 00:23:29.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:29.527 
"adrfam": "ipv4", 00:23:29.527 "trsvcid": "$NVMF_PORT", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:29.527 "hdgst": ${hdgst:-false}, 00:23:29.527 "ddgst": ${ddgst:-false} 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 } 00:23:29.527 EOF 00:23:29.527 )") 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:29.527 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:29.527 10:12:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme1", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme2", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme3", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme4", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme5", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme6", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme7", 
00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme8", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme9", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 },{ 00:23:29.527 "params": { 00:23:29.527 "name": "Nvme10", 00:23:29.527 "trtype": "tcp", 00:23:29.527 "traddr": "10.0.0.2", 00:23:29.527 "adrfam": "ipv4", 00:23:29.527 "trsvcid": "4420", 00:23:29.527 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:29.527 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:29.527 "hdgst": false, 00:23:29.527 "ddgst": false 00:23:29.527 }, 00:23:29.527 "method": "bdev_nvme_attach_controller" 00:23:29.527 }' 00:23:29.527 [2024-07-25 10:12:08.466168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.527 [2024-07-25 10:12:08.530924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:31.441 Running I/O for 10 seconds... 
00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:31.441 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.703 10:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=129 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 129 -ge 100 ']' 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1365722 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1365722 ']' 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1365722 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1365722 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1365722' 00:23:31.703 killing process with pid 1365722 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1365722 00:23:31.703 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1365722 00:23:31.703 Received shutdown signal, test time was about 0.689408 seconds 00:23:31.703 00:23:31.703 Latency(us) 00:23:31.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.703 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme1n1 : 0.65 294.43 18.40 0.00 0.00 213722.17 21080.75 225443.84 00:23:31.703 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme2n1 : 0.68 283.91 17.74 0.00 0.00 215305.67 24139.09 230686.72 00:23:31.703 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme3n1 : 0.67 191.57 11.97 0.00 0.00 308613.12 41506.13 260396.37 00:23:31.703 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme4n1 : 0.69 185.91 11.62 0.00 0.00 309268.05 17913.17 353020.59 00:23:31.703 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme5n1 : 0.67 284.78 17.80 0.00 0.00 194859.52 18350.08 207967.57 00:23:31.703 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme6n1 : 0.63 203.39 12.71 0.00 0.00 260071.68 22937.60 235929.60 00:23:31.703 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme7n1 : 0.64 319.18 19.95 0.00 0.00 157418.83 5679.79 204472.32 00:23:31.703 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme8n1 : 0.64 199.81 12.49 0.00 0.00 247210.67 25122.13 217579.52 00:23:31.703 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme9n1 : 0.62 103.10 6.44 0.00 0.00 456239.79 63788.37 401954.13 00:23:31.703 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:31.703 Verification LBA range: start 0x0 length 0x400 00:23:31.703 Nvme10n1 : 0.65 195.59 12.22 0.00 0.00 235047.25 45219.84 208841.39 00:23:31.703 =================================================================================================================== 00:23:31.703 Total : 2261.67 141.35 0.00 0.00 239412.49 5679.79 401954.13 00:23:31.963 10:12:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1365342 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:32.907 rmmod nvme_tcp 00:23:32.907 rmmod nvme_fabrics 00:23:32.907 rmmod nvme_keyring 00:23:32.907 10:12:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:32.907 
10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1365342 ']' 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1365342 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1365342 ']' 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1365342 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.907 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1365342 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1365342' 00:23:33.168 killing process with pid 1365342 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1365342 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1365342 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.168 10:12:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:35.716 00:23:35.716 real 0m7.794s 00:23:35.716 user 0m22.966s 00:23:35.716 sys 0m1.309s 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:35.716 ************************************ 
00:23:35.716 END TEST nvmf_shutdown_tc2 00:23:35.716 ************************************ 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:35.716 ************************************ 00:23:35.716 START TEST nvmf_shutdown_tc3 00:23:35.716 ************************************ 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.716 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:35.717 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:35.717 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:35.717 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:35.717 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:35.717 10:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:35.717 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:35.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:35.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:23:35.718 00:23:35.718 --- 10.0.0.2 ping statistics --- 00:23:35.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.718 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:35.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:35.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:23:35.718 00:23:35.718 --- 10.0.0.1 ping statistics --- 00:23:35.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:35.718 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1367455 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1367455 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1367455 ']' 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:35.718 10:12:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:35.979 [2024-07-25 10:12:14.911513] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:35.979 [2024-07-25 10:12:14.911578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:35.979 EAL: No free 2048 kB hugepages reported on node 1 00:23:35.979 [2024-07-25 10:12:15.002772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:35.979 [2024-07-25 10:12:15.074545] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:35.979 [2024-07-25 10:12:15.074592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:35.979 [2024-07-25 10:12:15.074598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:35.979 [2024-07-25 10:12:15.074602] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:35.979 [2024-07-25 10:12:15.074607] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:35.979 [2024-07-25 10:12:15.074774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:35.979 [2024-07-25 10:12:15.074943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:35.979 [2024-07-25 10:12:15.075061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.980 [2024-07-25 10:12:15.075064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:36.557 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.557 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:36.557 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.557 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:36.557 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 [2024-07-25 10:12:15.732663] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:36.818 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:36.819 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:36.819 10:12:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:36.819 Malloc1 00:23:36.819 [2024-07-25 10:12:15.831420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.819 Malloc2 00:23:36.819 Malloc3 00:23:36.819 Malloc4 00:23:37.080 Malloc5 00:23:37.080 Malloc6 00:23:37.080 Malloc7 00:23:37.080 Malloc8 00:23:37.080 Malloc9 00:23:37.080 Malloc10 00:23:37.080 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.080 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:37.080 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:37.080 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1367709 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1367709 /var/tmp/bdevperf.sock 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1367709 ']' 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.342 { 00:23:37.342 "params": { 00:23:37.342 "name": "Nvme$subsystem", 00:23:37.342 "trtype": "$TEST_TRANSPORT", 00:23:37.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.342 "adrfam": "ipv4", 00:23:37.342 "trsvcid": "$NVMF_PORT", 00:23:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.342 "hdgst": ${hdgst:-false}, 00:23:37.342 "ddgst": ${ddgst:-false} 00:23:37.342 }, 00:23:37.342 "method": "bdev_nvme_attach_controller" 00:23:37.342 } 00:23:37.342 EOF 00:23:37.342 )") 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.342 { 00:23:37.342 "params": { 00:23:37.342 "name": "Nvme$subsystem", 00:23:37.342 "trtype": "$TEST_TRANSPORT", 00:23:37.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.342 "adrfam": "ipv4", 00:23:37.342 "trsvcid": "$NVMF_PORT", 00:23:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.342 "hdgst": ${hdgst:-false}, 00:23:37.342 "ddgst": ${ddgst:-false} 00:23:37.342 }, 00:23:37.342 "method": "bdev_nvme_attach_controller" 00:23:37.342 } 00:23:37.342 EOF 00:23:37.342 )") 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.342 { 00:23:37.342 "params": { 00:23:37.342 "name": "Nvme$subsystem", 00:23:37.342 "trtype": "$TEST_TRANSPORT", 00:23:37.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.342 "adrfam": "ipv4", 00:23:37.342 "trsvcid": "$NVMF_PORT", 00:23:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.342 "hdgst": ${hdgst:-false}, 00:23:37.342 "ddgst": ${ddgst:-false} 00:23:37.342 }, 00:23:37.342 "method": "bdev_nvme_attach_controller" 00:23:37.342 } 00:23:37.342 EOF 00:23:37.342 )") 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.342 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.342 { 00:23:37.342 "params": { 00:23:37.342 "name": "Nvme$subsystem", 00:23:37.342 "trtype": "$TEST_TRANSPORT", 00:23:37.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.342 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 [2024-07-25 10:12:16.285926] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
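bdevperf itself is started against its own RPC socket (-r /var/tmp/bdevperf.sock) and reads its bdev configuration from --json /dev/fd/63, the file descriptor bash assigns when the output of gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 is supplied to it; each of the ten config fragments being emitted in this stretch of the trace becomes one bdev_nvme_attach_controller entry in that JSON. A launch sketch follows; the <(...) process-substitution form is inferred from the /dev/fd/63 argument rather than shown literally, and it assumes the test's common helpers (which define gen_nvmf_target_json) are sourced.

# Launch sketch for the bdevperf job traced above (paths taken from the log).
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

"$rootdir/build/examples/bdevperf" \
    -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json {1..10}) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!

# Each generated entry attaches one controller over TCP, e.g. Nvme1 ->
# nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 with host NQN
# nqn.2016-06.io.spdk:host1 and header/data digests disabled, matching the
# assembled JSON printed later in the trace.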
00:23:37.343 [2024-07-25 10:12:16.285979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1367709 ] 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:37.343 { 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme$subsystem", 00:23:37.343 "trtype": "$TEST_TRANSPORT", 00:23:37.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.343 
"adrfam": "ipv4", 00:23:37.343 "trsvcid": "$NVMF_PORT", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.343 "hdgst": ${hdgst:-false}, 00:23:37.343 "ddgst": ${ddgst:-false} 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 } 00:23:37.343 EOF 00:23:37.343 )") 00:23:37.343 EAL: No free 2048 kB hugepages reported on node 1 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:37.343 10:12:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme1", 00:23:37.343 "trtype": "tcp", 00:23:37.343 "traddr": "10.0.0.2", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "4420", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.343 "hdgst": false, 00:23:37.343 "ddgst": false 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 },{ 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme2", 00:23:37.343 "trtype": "tcp", 00:23:37.343 "traddr": "10.0.0.2", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "4420", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:37.343 "hdgst": false, 00:23:37.343 "ddgst": false 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 },{ 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme3", 00:23:37.343 "trtype": "tcp", 00:23:37.343 "traddr": "10.0.0.2", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "4420", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:37.343 "hdgst": false, 00:23:37.343 "ddgst": false 00:23:37.343 }, 00:23:37.343 "method": "bdev_nvme_attach_controller" 00:23:37.343 },{ 00:23:37.343 "params": { 00:23:37.343 "name": "Nvme4", 00:23:37.343 "trtype": "tcp", 00:23:37.343 "traddr": "10.0.0.2", 00:23:37.343 "adrfam": "ipv4", 00:23:37.343 "trsvcid": "4420", 00:23:37.343 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:37.343 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme5", 00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme6", 00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme7", 
00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme8", 00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme9", 00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 },{ 00:23:37.344 "params": { 00:23:37.344 "name": "Nvme10", 00:23:37.344 "trtype": "tcp", 00:23:37.344 "traddr": "10.0.0.2", 00:23:37.344 "adrfam": "ipv4", 00:23:37.344 "trsvcid": "4420", 00:23:37.344 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:37.344 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:37.344 "hdgst": false, 00:23:37.344 "ddgst": false 00:23:37.344 }, 00:23:37.344 "method": "bdev_nvme_attach_controller" 00:23:37.344 }' 00:23:37.344 [2024-07-25 10:12:16.345988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.344 [2024-07-25 10:12:16.411136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.260 Running I/O for 10 seconds... 
00:23:39.260 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:39.260 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:39.260 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:39.260 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.260 10:12:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:39.260 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1367455 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1367455 ']' 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1367455 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1367455 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1367455' 00:23:39.540 killing process with pid 1367455 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1367455 00:23:39.540 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1367455 00:23:39.540 [2024-07-25 10:12:18.506990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1fe0 is same with the state(5) to be set 00:23:39.540 [2024-07-25 10:12:18.507064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1fe0 is same with the state(5) to be set 00:23:39.540 [2024-07-25 10:12:18.507070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1fe0 is same with the state(5) to be set 00:23:39.540 [2024-07-25 10:12:18.507075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1fe0 is same with the state(5) to be set 00:23:39.540 [2024-07-25 10:12:18.507080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21b1fe0 is same with the state(5) to be set 00:23:39.540 [2024-07-25 10:12:18.507085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
[output condensed: beginning at 2024-07-25 10:12:18.506990 and running through 10:12:18.513679, tcp.c:1653:nvmf_tcp_qpair_set_recv_state prints the same message, "*ERROR*: The recv state of tqpair=0x... is same with the state(5) to be set", typically several dozen times per qpair, for qpairs 0x21b1fe0, 0x21b24a0, 0x1fe1a90, 0x1fe1f50, 0x1fe28f0 and 0x1fe3750 as the target's TCP connections are torn down after the kill.]
[output condensed: from 10:12:18.516413 through 10:12:18.517301, for each of the host-side qpairs 0x140a5d0, 0x142eaf0, 0x15d0480, 0x1417e30, 0x15d6250, 0x15adf00, 0xf5c340, 0x15d0770, 0x15ae1a0 and 0x1567d00, nvme_qpair.c: 223:nvme_admin_qpair_print_command and nvme_qpair.c: 474:spdk_nvme_print_completion report the four outstanding ASYNC EVENT REQUEST (0c) admin commands (qid:0, cid:0-3) as ABORTED - SQ DELETION (00/08), each group followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x... is same with the state(5) to be set. Starting at 10:12:18.519637, in-flight WRITE commands on sqid:1 (cid:49 lba:22656 len:128, cid:50 lba:22784 len:128, ...) are likewise reported as ABORTED - SQ DELETION.]
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.519989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.519997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:39.546 [2024-07-25 10:12:18.520041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.546 [2024-07-25 10:12:18.520168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.546 [2024-07-25 10:12:18.520177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 
10:12:18.520217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520386] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520722] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.520747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.520801] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x153e710 was disconnected and freed. reset controller. 00:23:39.547 [2024-07-25 10:12:18.539487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.539523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.547 [2024-07-25 10:12:18.539538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.547 [2024-07-25 10:12:18.539546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539649] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.539985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.539991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.548 [2024-07-25 10:12:18.540211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.548 [2024-07-25 10:12:18.540219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.540606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.540935] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14bd4d0 was disconnected and freed. reset controller. 
00:23:39.549 [2024-07-25 10:12:18.541005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140a5d0 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142eaf0 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0480 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417e30 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6250 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adf00 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5c340 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0770 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ae1a0 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1567d00 (9): Bad file descriptor
00:23:39.549 [2024-07-25 10:12:18.541351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.549 [2024-07-25 10:12:18.541367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.549 [2024-07-25 10:12:18.541379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.549 [2024-07-25 10:12:18.541387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.549 [2024-07-25 10:12:18.541396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.549 [2024-07-25 10:12:18.541404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.549 [2024-07-25 10:12:18.541414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.549 [2024-07-25 10:12:18.541421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.549 [2024-07-25 10:12:18.541430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.549 [2024-07-25 10:12:18.541440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.549 [2024-07-25 10:12:18.541450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:39.549 [2024-07-25 10:12:18.541457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.541466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.541473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.541483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.549 [2024-07-25 10:12:18.541490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.549 [2024-07-25 10:12:18.541499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.551 [2024-07-25 10:12:18.541621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 
[2024-07-25 10:12:18.541786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 
10:12:18.541949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.551 [2024-07-25 10:12:18.541958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.551 [2024-07-25 10:12:18.541965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.541974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.541981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.541990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.541997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 
10:12:18.542113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542285] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542468] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x157a7e0 was disconnected and freed. reset controller. 
00:23:39.552 [2024-07-25 10:12:18.542544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.552 [2024-07-25 10:12:18.542702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.552 [2024-07-25 10:12:18.542709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 
10:12:18.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 
10:12:18.542883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.542989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.542998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 
10:12:18.543049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 
10:12:18.543218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.553 [2024-07-25 10:12:18.543269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.553 [2024-07-25 10:12:18.543276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 
10:12:18.543386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 
10:12:18.543555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.543611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.543661] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1406270 was disconnected and freed. reset controller. 00:23:39.554 [2024-07-25 10:12:18.544933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.544948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.544964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.544974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.544985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545053] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.554 [2024-07-25 10:12:18.545226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.554 [2024-07-25 10:12:18.545236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.545453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.545460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553236] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.555 [2024-07-25 10:12:18.553485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.555 [2024-07-25 10:12:18.553492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553575] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.553705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.553778] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x153fc10 was disconnected and freed. reset controller. 00:23:39.556 [2024-07-25 10:12:18.555113] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.556 [2024-07-25 10:12:18.555139] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:39.556 [2024-07-25 10:12:18.555151] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.556 [2024-07-25 10:12:18.555163] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.556 [2024-07-25 10:12:18.555181] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.556 [2024-07-25 10:12:18.558934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:39.556 [2024-07-25 10:12:18.559161] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.556 [2024-07-25 10:12:18.559515] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:39.556 [2024-07-25 10:12:18.559823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:39.556 [2024-07-25 10:12:18.560416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.556 [2024-07-25 10:12:18.560456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15adf00 with addr=10.0.0.2, port=4420 00:23:39.556 [2024-07-25 10:12:18.560469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15adf00 is same with the state(5) to be set 00:23:39.556 [2024-07-25 10:12:18.560539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 
[2024-07-25 10:12:18.560660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.556 [2024-07-25 10:12:18.560806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.556 [2024-07-25 10:12:18.560816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560832] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.560988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.560997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561336] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.557 [2024-07-25 10:12:18.561441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.557 [2024-07-25 10:12:18.561450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561499] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.561615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.561623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.562925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.562940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.562953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.562961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.562972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.562981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.562992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.558 [2024-07-25 10:12:18.563434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.558 [2024-07-25 10:12:18.563443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.559 [2024-07-25 10:12:18.563850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.563984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.563993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.564000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.564009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 
10:12:18.564017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.564026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.564033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.565593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.565607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.559 [2024-07-25 10:12:18.565619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.559 [2024-07-25 10:12:18.565626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.565990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.565997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.560 [2024-07-25 10:12:18.566150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.560 [2024-07-25 10:12:18.566159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
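The record pairs repeated throughout this dump are SPDK's per-command trace of the queue teardown: every outstanding READ on I/O qpair 1 (cid 0 through 63, lba stepping by 128 blocks, len:128) is completed with the generic status "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 and status code 0x08, because the submission queue is deleted while the controller is being reset. An application's completion callback can tell this transient abort apart from a real I/O error by inspecting the completion's status fields. The sketch below is illustrative only: it assumes SPDK's public nvme.h, and read_complete / struct io_ctx are made-up names, not part of this test.

/* Sketch of an I/O completion callback that recognizes the
 * "ABORTED - SQ DELETION (00/08)" status seen in the dump above.
 * Assumes the SPDK public headers are available. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

struct io_ctx {                 /* hypothetical per-I/O bookkeeping */
    bool done;
    bool retry;                 /* set when the failure is a transient abort */
};

/* Matches SPDK's spdk_nvme_cmd_cb signature. */
static void
read_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
    struct io_ctx *ctx = arg;

    ctx->done = true;
    if (!spdk_nvme_cpl_is_error(cpl)) {
        return;                 /* READ completed successfully */
    }
    /* "(00/08)" in the log is status code type 0x0 / status code 0x08:
     * the command was aborted because its submission queue was deleted,
     * so it never reached the media and can be resubmitted once the
     * controller has reconnected. */
    if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
        cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
        ctx->retry = true;
    } else {
        fprintf(stderr, "READ failed: sct=0x%x sc=0x%x\n",
                cpl->status.sct, cpl->status.sc);
    }
}

Such a callback would be supplied as the cb_fn argument when the READ is submitted (for example via spdk_nvme_ns_cmd_read()); on an SQ-deletion abort the I/O can simply be resubmitted after the reset completes.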
00:23:39.561 [2024-07-25 10:12:18.566594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.566659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.566666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 
10:12:18.568307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.561 [2024-07-25 10:12:18.568314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.561 [2024-07-25 10:12:18.568324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.562 [2024-07-25 10:12:18.568921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.562 [2024-07-25 10:12:18.568928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.568938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.568945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.568954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.568961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.568971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.568977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.568987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.568995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.569277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.569286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1531540 is same with the state(5) to be set 00:23:39.563 [2024-07-25 10:12:18.570848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570863] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.570984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.570994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.563 [2024-07-25 10:12:18.571128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.563 [2024-07-25 10:12:18.571136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:23:39.564 [2024-07-25 10:12:18.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 
[2024-07-25 10:12:18.571728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.564 [2024-07-25 10:12:18.571762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.564 [2024-07-25 10:12:18.571770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 10:12:18.571884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.565 [2024-07-25 10:12:18.571892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.565 [2024-07-25 
10:12:18.571902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.565 [2024-07-25 10:12:18.571911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.565 [2024-07-25 10:12:18.571921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.565 [2024-07-25 10:12:18.571928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.565 [2024-07-25 10:12:18.571938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:39.565 [2024-07-25 10:12:18.571947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:39.565 [2024-07-25 10:12:18.571955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1540fb0 is same with the state(5) to be set
00:23:39.565 [2024-07-25 10:12:18.573450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:23:39.565 [2024-07-25 10:12:18.573472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:39.565 [2024-07-25 10:12:18.573483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:39.565 [2024-07-25 10:12:18.573492] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:39.565 [2024-07-25 10:12:18.573501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:23:39.565 [2024-07-25 10:12:18.574085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:39.565 [2024-07-25 10:12:18.574101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1417e30 with addr=10.0.0.2, port=4420
00:23:39.565 [2024-07-25 10:12:18.574110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1417e30 is same with the state(5) to be set
00:23:39.565 [2024-07-25 10:12:18.574123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adf00 (9): Bad file descriptor
00:23:39.565 [2024-07-25 10:12:18.574150] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:39.565 [2024-07-25 10:12:18.574164] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:39.565 [2024-07-25 10:12:18.574182] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:39.565 [2024-07-25 10:12:18.574194] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
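Once the queues are gone, bdev_nvme starts resetting the affected controllers (the "resetting controller" notices for cnode1, cnode2, cnode5, cnode8 and cnode10 above), and overlapping reset requests are declined with "Unable to perform failover, already in progress." The reconnect attempts fail in posix_sock_create with errno = 111, which is ECONNREFUSED on Linux: the TCP connection to 10.0.0.2:4420 is being refused, consistent with the target side no longer accepting connections while this shutdown scenario runs. A minimal POSIX sketch of a bounded reconnect loop against a refused listener is below; it is illustrative only and is not SPDK's internal retry logic, and the address, port and retry count are taken from this log or simply assumed.

/* Why "connect() failed, errno = 111" appears above: 111 is ECONNREFUSED
 * on Linux.  Plain POSIX sketch of retrying only on refusals. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int
connect_with_retry(const char *ip, uint16_t port, int attempts)
{
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };

    if (inet_pton(AF_INET, ip, &sa.sin_addr) != 1) {
        return -1;                              /* bad address string */
    }
    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            return -1;
        }
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            return fd;                          /* connected */
        }
        int err = errno;
        close(fd);
        if (err != ECONNREFUSED) {
            fprintf(stderr, "connect: %s\n", strerror(err));
            return -1;                          /* give up on other errors */
        }
        sleep(1);                               /* listener not accepting yet */
    }
    return -1;                                  /* still refused after all attempts */
}

int main(void)
{
    int fd = connect_with_retry("10.0.0.2", 4420, 5);
    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}

The loop only illustrates the errno seen here; SPDK's own transports keep retrying according to their configured reconnect behaviour rather than a fixed attempt count.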
00:23:39.565 [2024-07-25 10:12:18.574210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1417e30 (9): Bad file descriptor 00:23:39.565 [2024-07-25 10:12:18.574314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:39.565 [2024-07-25 10:12:18.574327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:23:39.565 task offset: 22656 on job bdev=Nvme7n1 fails
00:23:39.565
00:23:39.565 Latency(us)
00:23:39.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:39.565 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme1n1 ended in about 0.66 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme1n1 : 0.66 192.97 12.06 96.49 0.00 217575.82 24576.00 221074.77
00:23:39.565 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme2n1 ended in about 0.67 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme2n1 : 0.67 96.14 6.01 96.14 0.00 318030.93 25122.13 263891.63
00:23:39.565 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme3n1 ended in about 0.66 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme3n1 : 0.66 97.42 6.09 97.42 0.00 304039.25 37573.97 344282.45
00:23:39.565 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme4n1 ended in about 0.67 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme4n1 : 0.67 95.76 5.99 95.76 0.00 300029.87 24139.09 237677.23
00:23:39.565 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme5n1 ended in about 0.66 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme5n1 : 0.66 194.49 12.16 97.24 0.00 190052.41 24029.87 255153.49
00:23:39.565 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme6n1 ended in about 0.67 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme6n1 : 0.67 95.39 5.96 95.39 0.00 281865.81 24576.00 251658.24
00:23:39.565 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme7n1 ended in about 0.65 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme7n1 : 0.65 198.32 12.40 99.16 0.00 172836.41 24248.32 196608.00
00:23:39.565 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme8n1 ended in about 0.66 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme8n1 : 0.66 194.14 12.13 97.07 0.00 171130.60 18568.53 241172.48
00:23:39.565 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme9n1 ended in about 0.67 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme9n1 : 0.67 95.01 5.94 95.01 0.00 254510.93 44127.57 222822.40
00:23:39.565 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:39.565 Job: Nvme10n1 ended in about 0.66 seconds with error
00:23:39.565 Verification LBA range: start 0x0 length 0x400
00:23:39.565 Nvme10n1 : 0.66 97.63 6.10 97.63 0.00 235967.15 37573.97 351272.96
00:23:39.565 ===================================================================================================================
00:23:39.565 Total : 1357.28 84.83 967.32 0.00 235153.07 18568.53 351272.96
00:23:39.565 [2024-07-25 10:12:18.599363] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:39.565 [2024-07-25 10:12:18.599407] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:39.565 [2024-07-25 10:12:18.600027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.565 [2024-07-25 10:12:18.600048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1567d00 with addr=10.0.0.2, port=4420 00:23:39.565 [2024-07-25 10:12:18.600058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1567d00 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.600613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.600653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf5c340 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.600664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf5c340 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.601140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.601152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d0770 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.601160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d0770 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.601604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.601643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x140a5d0 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.601654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x140a5d0 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.602160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.602173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d6250 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.602181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6250 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.602198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.602212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.602221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:39.566 [2024-07-25 10:12:18.603639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:39.566 [2024-07-25 10:12:18.603813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.603827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x142eaf0 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.603834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142eaf0 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.604435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.604474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ae1a0 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.604486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ae1a0 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.604833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.604849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d0480 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.604857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d0480 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.604870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1567d00 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.604883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5c340 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.604893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0770 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.604902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140a5d0 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.604911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6250 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.604920] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.604926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.604934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:39.566 [2024-07-25 10:12:18.604981] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.566 [2024-07-25 10:12:18.604996] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.566 [2024-07-25 10:12:18.605008] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.566 [2024-07-25 10:12:18.605020] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.566 [2024-07-25 10:12:18.605031] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:39.566 [2024-07-25 10:12:18.605041] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:39.566 [2024-07-25 10:12:18.605338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142eaf0 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.605368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ae1a0 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.605381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d0480 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.605389] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605415] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605438] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605463] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605470] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605565] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:23:39.566 [2024-07-25 10:12:18.605578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:39.566 [2024-07-25 10:12:18.605603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605664] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.605671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.605681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:39.566 [2024-07-25 10:12:18.605710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.605725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:39.566 [2024-07-25 10:12:18.606242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:39.566 [2024-07-25 10:12:18.606264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15adf00 with addr=10.0.0.2, port=4420 00:23:39.566 [2024-07-25 10:12:18.606273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15adf00 is same with the state(5) to be set 00:23:39.566 [2024-07-25 10:12:18.606307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15adf00 (9): Bad file descriptor 00:23:39.566 [2024-07-25 10:12:18.606336] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:39.566 [2024-07-25 10:12:18.606343] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:39.566 [2024-07-25 10:12:18.606351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:23:39.566 [2024-07-25 10:12:18.606380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:39.842 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:39.842 10:12:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1367709 00:23:40.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1367709) - No such process 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.784 rmmod nvme_tcp 00:23:40.784 rmmod nvme_fabrics 00:23:40.784 rmmod nvme_keyring 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.784 10:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.784 10:12:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.332 00:23:43.332 real 0m7.467s 00:23:43.332 user 0m17.303s 00:23:43.332 sys 0m1.220s 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:43.332 ************************************ 00:23:43.332 END TEST nvmf_shutdown_tc3 00:23:43.332 ************************************ 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:43.332 00:23:43.332 real 0m32.182s 00:23:43.332 user 1m14.183s 00:23:43.332 sys 0m9.412s 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.332 10:12:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.332 ************************************ 00:23:43.332 END TEST nvmf_shutdown 00:23:43.332 ************************************ 00:23:43.332 10:12:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:43.332 00:23:43.332 real 11m33.441s 00:23:43.332 user 24m48.407s 00:23:43.332 sys 3m24.505s 00:23:43.332 10:12:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.332 10:12:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:43.332 ************************************ 00:23:43.332 END TEST nvmf_target_extra 00:23:43.332 ************************************ 00:23:43.332 10:12:22 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:43.332 10:12:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.332 10:12:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.332 10:12:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.332 ************************************ 00:23:43.332 START TEST nvmf_host 00:23:43.332 ************************************ 00:23:43.332 10:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:43.332 * Looking for test storage... 
00:23:43.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.333 ************************************ 00:23:43.333 START TEST nvmf_multicontroller 00:23:43.333 ************************************ 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:43.333 * Looking for test storage... 
00:23:43.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.333 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.334 10:12:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.482 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.482 10:12:29 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:51.483 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:51.483 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:51.483 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:51.483 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:23:51.483 00:23:51.483 --- 10.0.0.2 ping statistics --- 00:23:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.483 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:51.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.488 ms 00:23:51.483 00:23:51.483 --- 10.0.0.1 ping statistics --- 00:23:51.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.483 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1372791 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1372791 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1372791 ']' 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.483 10:12:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 [2024-07-25 10:12:29.526103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:51.484 [2024-07-25 10:12:29.526191] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.484 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.484 [2024-07-25 10:12:29.614293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:51.484 [2024-07-25 10:12:29.707131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.484 [2024-07-25 10:12:29.707192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.484 [2024-07-25 10:12:29.707207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.484 [2024-07-25 10:12:29.707215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.484 [2024-07-25 10:12:29.707221] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:51.484 [2024-07-25 10:12:29.707370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:51.484 [2024-07-25 10:12:29.707652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:51.484 [2024-07-25 10:12:29.707653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 [2024-07-25 10:12:30.348502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 Malloc0 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 
10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 [2024-07-25 10:12:30.411994] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 [2024-07-25 10:12:30.423933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 Malloc1 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1372830 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1372830 /var/tmp/bdevperf.sock 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1372830 ']' 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.484 10:12:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.424 NVMe0n1 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.424 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.686 1 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.686 request: 00:23:52.686 { 00:23:52.686 "name": "NVMe0", 00:23:52.686 "trtype": "tcp", 00:23:52.686 "traddr": "10.0.0.2", 00:23:52.686 "adrfam": "ipv4", 00:23:52.686 
"trsvcid": "4420", 00:23:52.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.686 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:52.686 "hostaddr": "10.0.0.2", 00:23:52.686 "hostsvcid": "60000", 00:23:52.686 "prchk_reftag": false, 00:23:52.686 "prchk_guard": false, 00:23:52.686 "hdgst": false, 00:23:52.686 "ddgst": false, 00:23:52.686 "method": "bdev_nvme_attach_controller", 00:23:52.686 "req_id": 1 00:23:52.686 } 00:23:52.686 Got JSON-RPC error response 00:23:52.686 response: 00:23:52.686 { 00:23:52.686 "code": -114, 00:23:52.686 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:52.686 } 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.686 request: 00:23:52.686 { 00:23:52.686 "name": "NVMe0", 00:23:52.686 "trtype": "tcp", 00:23:52.686 "traddr": "10.0.0.2", 00:23:52.686 "adrfam": "ipv4", 00:23:52.686 "trsvcid": "4420", 00:23:52.686 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:52.686 "hostaddr": "10.0.0.2", 00:23:52.686 "hostsvcid": "60000", 00:23:52.686 "prchk_reftag": false, 00:23:52.686 "prchk_guard": false, 00:23:52.686 "hdgst": false, 00:23:52.686 "ddgst": false, 00:23:52.686 "method": "bdev_nvme_attach_controller", 00:23:52.686 "req_id": 1 00:23:52.686 } 00:23:52.686 Got JSON-RPC error response 00:23:52.686 response: 00:23:52.686 { 00:23:52.686 "code": -114, 00:23:52.686 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:23:52.686 } 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.686 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.686 request: 00:23:52.686 { 00:23:52.686 "name": "NVMe0", 00:23:52.686 "trtype": "tcp", 00:23:52.686 "traddr": "10.0.0.2", 00:23:52.687 "adrfam": "ipv4", 00:23:52.687 "trsvcid": "4420", 00:23:52.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.687 "hostaddr": "10.0.0.2", 00:23:52.687 "hostsvcid": "60000", 00:23:52.687 "prchk_reftag": false, 00:23:52.687 "prchk_guard": false, 00:23:52.687 "hdgst": false, 00:23:52.687 "ddgst": false, 00:23:52.687 "multipath": "disable", 00:23:52.687 "method": "bdev_nvme_attach_controller", 00:23:52.687 "req_id": 1 00:23:52.687 } 00:23:52.687 Got JSON-RPC error response 00:23:52.687 response: 00:23:52.687 { 00:23:52.687 "code": -114, 00:23:52.687 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:52.687 } 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.687 request: 00:23:52.687 { 00:23:52.687 "name": "NVMe0", 00:23:52.687 "trtype": "tcp", 00:23:52.687 "traddr": "10.0.0.2", 00:23:52.687 "adrfam": "ipv4", 00:23:52.687 "trsvcid": "4420", 00:23:52.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:52.687 "hostaddr": "10.0.0.2", 00:23:52.687 "hostsvcid": "60000", 00:23:52.687 "prchk_reftag": false, 00:23:52.687 "prchk_guard": false, 00:23:52.687 "hdgst": false, 00:23:52.687 "ddgst": false, 00:23:52.687 "multipath": "failover", 00:23:52.687 "method": "bdev_nvme_attach_controller", 00:23:52.687 "req_id": 1 00:23:52.687 } 00:23:52.687 Got JSON-RPC error response 00:23:52.687 response: 00:23:52.687 { 00:23:52.687 "code": -114, 00:23:52.687 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:52.687 } 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.687 00:23:52.687 10:12:31 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.687 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.948 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:52.948 10:12:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.336 0 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1372830 ']' 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1372830' 00:23:54.336 killing process with pid 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1372830 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.336 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:54.337 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.337 [2024-07-25 10:12:30.555306] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:54.337 [2024-07-25 10:12:30.555374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372830 ] 00:23:54.337 EAL: No free 2048 kB hugepages reported on node 1 00:23:54.337 [2024-07-25 10:12:30.616238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.337 [2024-07-25 10:12:30.680878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.337 [2024-07-25 10:12:31.962698] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 4d3121ba-0f98-4551-99f6-9d4fb3e0bf47 already exists 00:23:54.337 [2024-07-25 10:12:31.962727] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:4d3121ba-0f98-4551-99f6-9d4fb3e0bf47 alias for bdev NVMe1n1 00:23:54.337 [2024-07-25 10:12:31.962736] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:54.337 Running I/O for 1 seconds... 00:23:54.337 00:23:54.337 Latency(us) 00:23:54.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.337 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:54.337 NVMe0n1 : 1.00 20563.31 80.33 0.00 0.00 6212.32 3495.25 20097.71 00:23:54.337 =================================================================================================================== 00:23:54.337 Total : 20563.31 80.33 0.00 0.00 6212.32 3495.25 20097.71 00:23:54.337 Received shutdown signal, test time was about 1.000000 seconds 00:23:54.337 00:23:54.337 Latency(us) 00:23:54.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.337 =================================================================================================================== 00:23:54.337 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:54.337 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:54.337 rmmod nvme_tcp 00:23:54.337 rmmod nvme_fabrics 00:23:54.337 rmmod nvme_keyring 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1372791 ']' 00:23:54.337 10:12:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1372791 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1372791 ']' 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1372791 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1372791 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1372791' 00:23:54.337 killing process with pid 1372791 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1372791 00:23:54.337 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1372791 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.598 10:12:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.148 10:12:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:57.148 00:23:57.148 real 0m13.418s 00:23:57.148 user 0m16.787s 00:23:57.148 sys 0m6.015s 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 ************************************ 00:23:57.149 END TEST nvmf_multicontroller 00:23:57.149 ************************************ 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.149 ************************************ 00:23:57.149 START TEST nvmf_aer 00:23:57.149 ************************************ 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:57.149 * Looking for test storage... 00:23:57.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:57.149 10:12:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:03.747 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:03.747 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:03.747 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:03.747 10:12:42 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:03.747 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:03.747 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.008 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.008 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.008 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:04.008 10:12:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:04.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:24:04.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:24:04.008 00:24:04.008 --- 10.0.0.2 ping statistics --- 00:24:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.008 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.399 ms 00:24:04.008 00:24:04.008 --- 10.0.0.1 ping statistics --- 00:24:04.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.008 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:04.008 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1377577 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1377577 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1377577 ']' 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.269 10:12:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.269 [2024-07-25 10:12:43.228062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:04.269 [2024-07-25 10:12:43.228113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.269 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.269 [2024-07-25 10:12:43.298634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.269 [2024-07-25 10:12:43.370652] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.269 [2024-07-25 10:12:43.370694] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.269 [2024-07-25 10:12:43.370702] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.269 [2024-07-25 10:12:43.370708] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.269 [2024-07-25 10:12:43.370714] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.269 [2024-07-25 10:12:43.370854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.269 [2024-07-25 10:12:43.370967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.269 [2024-07-25 10:12:43.371122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.269 [2024-07-25 10:12:43.371123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.209 [2024-07-25 10:12:44.054139] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:05.209 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.210 Malloc0 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.210 10:12:44 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.210 [2024-07-25 10:12:44.113587] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.210 [ 00:24:05.210 { 00:24:05.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:05.210 "subtype": "Discovery", 00:24:05.210 "listen_addresses": [], 00:24:05.210 "allow_any_host": true, 00:24:05.210 "hosts": [] 00:24:05.210 }, 00:24:05.210 { 00:24:05.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.210 "subtype": "NVMe", 00:24:05.210 "listen_addresses": [ 00:24:05.210 { 00:24:05.210 "trtype": "TCP", 00:24:05.210 "adrfam": "IPv4", 00:24:05.210 "traddr": "10.0.0.2", 00:24:05.210 "trsvcid": "4420" 00:24:05.210 } 00:24:05.210 ], 00:24:05.210 "allow_any_host": true, 00:24:05.210 "hosts": [], 00:24:05.210 "serial_number": "SPDK00000000000001", 00:24:05.210 "model_number": "SPDK bdev Controller", 00:24:05.210 "max_namespaces": 2, 00:24:05.210 "min_cntlid": 1, 00:24:05.210 "max_cntlid": 65519, 00:24:05.210 "namespaces": [ 00:24:05.210 { 00:24:05.210 "nsid": 1, 00:24:05.210 "bdev_name": "Malloc0", 00:24:05.210 "name": "Malloc0", 00:24:05.210 "nguid": "3A19D6BCC97B4EB9AFAD208831800CEB", 00:24:05.210 "uuid": "3a19d6bc-c97b-4eb9-afad-208831800ceb" 00:24:05.210 } 00:24:05.210 ] 00:24:05.210 } 00:24:05.210 ] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1377853 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:05.210 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:05.210 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.472 Malloc1 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.472 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.472 [ 00:24:05.472 { 00:24:05.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:05.472 "subtype": "Discovery", 00:24:05.472 "listen_addresses": [], 00:24:05.472 "allow_any_host": true, 00:24:05.472 "hosts": [] 00:24:05.472 }, 00:24:05.472 { 00:24:05.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.472 "subtype": "NVMe", 00:24:05.472 "listen_addresses": [ 00:24:05.472 { 00:24:05.472 "trtype": "TCP", 00:24:05.472 "adrfam": "IPv4", 00:24:05.472 "traddr": "10.0.0.2", 00:24:05.472 "trsvcid": "4420" 00:24:05.472 } 00:24:05.472 ], 00:24:05.472 "allow_any_host": true, 00:24:05.472 "hosts": [], 00:24:05.472 "serial_number": "SPDK00000000000001", 00:24:05.472 "model_number": "SPDK bdev Controller", 00:24:05.472 "max_namespaces": 2, 00:24:05.472 "min_cntlid": 1, 00:24:05.472 "max_cntlid": 65519, 00:24:05.472 "namespaces": [ 00:24:05.472 { 00:24:05.472 "nsid": 1, 00:24:05.472 "bdev_name": "Malloc0", 00:24:05.472 "name": "Malloc0", 00:24:05.472 "nguid": "3A19D6BCC97B4EB9AFAD208831800CEB", 00:24:05.472 "uuid": "3a19d6bc-c97b-4eb9-afad-208831800ceb" 00:24:05.472 }, 00:24:05.472 { 00:24:05.472 "nsid": 2, 00:24:05.472 "bdev_name": "Malloc1", 00:24:05.472 "name": "Malloc1", 00:24:05.472 "nguid": 
"EEE4EA617D244828BB14C5C393D88C72", 00:24:05.472 "uuid": "eee4ea61-7d24-4828-bb14-c5c393d88c72" 00:24:05.472 } 00:24:05.473 ] 00:24:05.473 } 00:24:05.473 ] 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1377853 00:24:05.473 Asynchronous Event Request test 00:24:05.473 Attaching to 10.0.0.2 00:24:05.473 Attached to 10.0.0.2 00:24:05.473 Registering asynchronous event callbacks... 00:24:05.473 Starting namespace attribute notice tests for all controllers... 00:24:05.473 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:05.473 aer_cb - Changed Namespace 00:24:05.473 Cleaning up... 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:05.473 rmmod nvme_tcp 00:24:05.473 rmmod nvme_fabrics 00:24:05.473 rmmod nvme_keyring 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1377577 ']' 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1377577 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1377577 ']' 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1377577 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1377577 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1377577' 00:24:05.473 killing process with pid 1377577 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1377577 00:24:05.473 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1377577 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.773 10:12:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.684 10:12:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.685 00:24:07.685 real 0m11.056s 00:24:07.685 user 0m7.538s 00:24:07.685 sys 0m5.851s 00:24:07.685 10:12:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:07.685 10:12:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:07.685 ************************************ 00:24:07.685 END TEST nvmf_aer 00:24:07.685 ************************************ 00:24:07.944 10:12:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:07.944 10:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:07.944 10:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:07.944 10:12:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.944 ************************************ 00:24:07.944 START TEST nvmf_async_init 00:24:07.944 ************************************ 00:24:07.944 10:12:46 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:07.944 * Looking for test storage... 
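The nvmf_aer run that finishes above drives the target entirely through rpc_cmd, which these scripts use as a thin front end to the SPDK JSON-RPC client (scripts/rpc.py). A condensed sketch of the same sequence follows; the rpc.py invocation form, the paths relative to the SPDK tree and the trailing '&' are assumptions, while the commands, transport options, subsystem name, serial number and addresses are the ones visible in the trace (assumes an nvmf_tgt is already running on the default RPC socket):

  # TCP transport plus a 64 MB malloc bdev with 512-byte blocks
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  # subsystem open to any host (-a), capped at two namespaces (-m 2)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # start the AER listener, then add a second namespace to trigger the
  # "Changed Namespace" asynchronous event reported in the log
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2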
00:24:07.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.944 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:07.945 10:12:47 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=fa1dc4b232554085bd8801b11c3bb491 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.945 10:12:47 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:16.086 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:16.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:16.087 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:16.087 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:16.087 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:16.087 10:12:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:16.087 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:16.087 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.481 ms 00:24:16.087 00:24:16.087 --- 10.0.0.2 ping statistics --- 00:24:16.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.087 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:16.087 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:16.087 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:24:16.087 00:24:16.087 --- 10.0.0.1 ping statistics --- 00:24:16.087 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:16.087 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1382084 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1382084 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1382084 ']' 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:16.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:16.087 10:12:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.087 [2024-07-25 10:12:54.307910] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
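Before launching the target, nvmftestinit above discovered the two E810 ports (cvl_0_0, cvl_0_1), moved the target-side port into its own network namespace, and checked connectivity in both directions with ping. A bare-bones sketch of the equivalent manual topology, with interface names, addresses and the nvmf_tgt arguments taken from the trace; this is an approximation of what nvmf/common.sh automates, and the relative binary path is an assumption:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1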
00:24:16.087 [2024-07-25 10:12:54.307975] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.087 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.088 [2024-07-25 10:12:54.381234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.088 [2024-07-25 10:12:54.454464] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.088 [2024-07-25 10:12:54.454506] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.088 [2024-07-25 10:12:54.454514] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.088 [2024-07-25 10:12:54.454520] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.088 [2024-07-25 10:12:54.454526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.088 [2024-07-25 10:12:54.454547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 [2024-07-25 10:12:55.141525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 null0 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:16.088 10:12:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fa1dc4b232554085bd8801b11c3bb491 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.088 [2024-07-25 10:12:55.201810] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.088 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.349 nvme0n1 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.349 [ 00:24:16.349 { 00:24:16.349 "name": "nvme0n1", 00:24:16.349 "aliases": [ 00:24:16.349 "fa1dc4b2-3255-4085-bd88-01b11c3bb491" 00:24:16.349 ], 00:24:16.349 "product_name": "NVMe disk", 00:24:16.349 "block_size": 512, 00:24:16.349 "num_blocks": 2097152, 00:24:16.349 "uuid": "fa1dc4b2-3255-4085-bd88-01b11c3bb491", 00:24:16.349 "assigned_rate_limits": { 00:24:16.349 "rw_ios_per_sec": 0, 00:24:16.349 "rw_mbytes_per_sec": 0, 00:24:16.349 "r_mbytes_per_sec": 0, 00:24:16.349 "w_mbytes_per_sec": 0 00:24:16.349 }, 00:24:16.349 "claimed": false, 00:24:16.349 "zoned": false, 00:24:16.349 "supported_io_types": { 00:24:16.349 "read": true, 00:24:16.349 "write": true, 00:24:16.349 "unmap": false, 00:24:16.349 "flush": true, 00:24:16.349 "reset": true, 00:24:16.349 "nvme_admin": true, 00:24:16.349 "nvme_io": true, 00:24:16.349 "nvme_io_md": false, 00:24:16.349 "write_zeroes": true, 00:24:16.349 "zcopy": false, 00:24:16.349 "get_zone_info": false, 00:24:16.349 "zone_management": false, 00:24:16.349 "zone_append": false, 00:24:16.349 "compare": true, 00:24:16.349 "compare_and_write": true, 00:24:16.349 "abort": true, 00:24:16.349 "seek_hole": false, 00:24:16.349 "seek_data": false, 00:24:16.349 "copy": true, 00:24:16.349 "nvme_iov_md": 
false 00:24:16.349 }, 00:24:16.349 "memory_domains": [ 00:24:16.349 { 00:24:16.349 "dma_device_id": "system", 00:24:16.349 "dma_device_type": 1 00:24:16.349 } 00:24:16.349 ], 00:24:16.349 "driver_specific": { 00:24:16.349 "nvme": [ 00:24:16.349 { 00:24:16.349 "trid": { 00:24:16.349 "trtype": "TCP", 00:24:16.349 "adrfam": "IPv4", 00:24:16.349 "traddr": "10.0.0.2", 00:24:16.349 "trsvcid": "4420", 00:24:16.349 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.349 }, 00:24:16.349 "ctrlr_data": { 00:24:16.349 "cntlid": 1, 00:24:16.349 "vendor_id": "0x8086", 00:24:16.349 "model_number": "SPDK bdev Controller", 00:24:16.349 "serial_number": "00000000000000000000", 00:24:16.349 "firmware_revision": "24.09", 00:24:16.349 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.349 "oacs": { 00:24:16.349 "security": 0, 00:24:16.349 "format": 0, 00:24:16.349 "firmware": 0, 00:24:16.349 "ns_manage": 0 00:24:16.349 }, 00:24:16.349 "multi_ctrlr": true, 00:24:16.349 "ana_reporting": false 00:24:16.349 }, 00:24:16.349 "vs": { 00:24:16.349 "nvme_version": "1.3" 00:24:16.349 }, 00:24:16.349 "ns_data": { 00:24:16.349 "id": 1, 00:24:16.349 "can_share": true 00:24:16.349 } 00:24:16.349 } 00:24:16.349 ], 00:24:16.349 "mp_policy": "active_passive" 00:24:16.349 } 00:24:16.349 } 00:24:16.349 ] 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.349 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.349 [2024-07-25 10:12:55.478360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:16.349 [2024-07-25 10:12:55.478421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966f40 (9): Bad file descriptor 00:24:16.610 [2024-07-25 10:12:55.610298] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
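The reset exercise above attaches a host-side NVMe bdev to the subsystem and then bounces the controller; the bdev_get_bdevs dump that follows shows the same namespace reappearing with cntlid 2 instead of 1. Reduced to the RPC calls visible in the trace, shown here as scripts/rpc.py invocations (the invocation form is an assumption; names, NGUID and addresses are from the log):

  scripts/rpc.py bdev_null_create null0 1024 512
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g fa1dc4b232554085bd8801b11c3bb491
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # host side: attach as bdev nvme0, then force a controller reset
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_nvme_reset_controller nvme0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1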
00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.610 [ 00:24:16.610 { 00:24:16.610 "name": "nvme0n1", 00:24:16.610 "aliases": [ 00:24:16.610 "fa1dc4b2-3255-4085-bd88-01b11c3bb491" 00:24:16.610 ], 00:24:16.610 "product_name": "NVMe disk", 00:24:16.610 "block_size": 512, 00:24:16.610 "num_blocks": 2097152, 00:24:16.610 "uuid": "fa1dc4b2-3255-4085-bd88-01b11c3bb491", 00:24:16.610 "assigned_rate_limits": { 00:24:16.610 "rw_ios_per_sec": 0, 00:24:16.610 "rw_mbytes_per_sec": 0, 00:24:16.610 "r_mbytes_per_sec": 0, 00:24:16.610 "w_mbytes_per_sec": 0 00:24:16.610 }, 00:24:16.610 "claimed": false, 00:24:16.610 "zoned": false, 00:24:16.610 "supported_io_types": { 00:24:16.610 "read": true, 00:24:16.610 "write": true, 00:24:16.610 "unmap": false, 00:24:16.610 "flush": true, 00:24:16.610 "reset": true, 00:24:16.610 "nvme_admin": true, 00:24:16.610 "nvme_io": true, 00:24:16.610 "nvme_io_md": false, 00:24:16.610 "write_zeroes": true, 00:24:16.610 "zcopy": false, 00:24:16.610 "get_zone_info": false, 00:24:16.610 "zone_management": false, 00:24:16.610 "zone_append": false, 00:24:16.610 "compare": true, 00:24:16.610 "compare_and_write": true, 00:24:16.610 "abort": true, 00:24:16.610 "seek_hole": false, 00:24:16.610 "seek_data": false, 00:24:16.610 "copy": true, 00:24:16.610 "nvme_iov_md": false 00:24:16.610 }, 00:24:16.610 "memory_domains": [ 00:24:16.610 { 00:24:16.610 "dma_device_id": "system", 00:24:16.610 "dma_device_type": 1 00:24:16.610 } 00:24:16.610 ], 00:24:16.610 "driver_specific": { 00:24:16.610 "nvme": [ 00:24:16.610 { 00:24:16.610 "trid": { 00:24:16.610 "trtype": "TCP", 00:24:16.610 "adrfam": "IPv4", 00:24:16.610 "traddr": "10.0.0.2", 00:24:16.610 "trsvcid": "4420", 00:24:16.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.610 }, 00:24:16.610 "ctrlr_data": { 00:24:16.610 "cntlid": 2, 00:24:16.610 "vendor_id": "0x8086", 00:24:16.610 "model_number": "SPDK bdev Controller", 00:24:16.610 "serial_number": "00000000000000000000", 00:24:16.610 "firmware_revision": "24.09", 00:24:16.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.610 "oacs": { 00:24:16.610 "security": 0, 00:24:16.610 "format": 0, 00:24:16.610 "firmware": 0, 00:24:16.610 "ns_manage": 0 00:24:16.610 }, 00:24:16.610 "multi_ctrlr": true, 00:24:16.610 "ana_reporting": false 00:24:16.610 }, 00:24:16.610 "vs": { 00:24:16.610 "nvme_version": "1.3" 00:24:16.610 }, 00:24:16.610 "ns_data": { 00:24:16.610 "id": 1, 00:24:16.610 "can_share": true 00:24:16.610 } 00:24:16.610 } 00:24:16.610 ], 00:24:16.610 "mp_policy": "active_passive" 00:24:16.610 } 00:24:16.610 } 00:24:16.610 ] 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.610 10:12:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.5cWpx3hbQk 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.5cWpx3hbQk 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.610 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.610 [2024-07-25 10:12:55.683013] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:16.611 [2024-07-25 10:12:55.683137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5cWpx3hbQk 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.611 [2024-07-25 10:12:55.695035] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5cWpx3hbQk 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.611 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.611 [2024-07-25 10:12:55.707089] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.611 [2024-07-25 10:12:55.707126] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:16.872 nvme0n1 00:24:16.872 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.872 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:16.872 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
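The last leg of async_init re-attaches to the same subsystem through a TLS-protected listener on port 4421: a PSK in the NVMe TLS interchange format is written to a temp file, any-host access is disabled, the secure listener and the allowed host with its PSK are registered, and the host-side attach passes the same key. The commands below only restate what the trace shows; the exact file handling is an approximation of async_init.sh, and the key value is the test key copied from the log:

  key=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
  chmod 0600 "$key"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"
  rm -f "$key"

The deprecation warnings in the trace ("PSK path ... to be removed in v24.09") refer to exactly this file-based way of passing the key.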
00:24:16.872 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.872 [ 00:24:16.872 { 00:24:16.872 "name": "nvme0n1", 00:24:16.872 "aliases": [ 00:24:16.872 "fa1dc4b2-3255-4085-bd88-01b11c3bb491" 00:24:16.872 ], 00:24:16.872 "product_name": "NVMe disk", 00:24:16.872 "block_size": 512, 00:24:16.872 "num_blocks": 2097152, 00:24:16.872 "uuid": "fa1dc4b2-3255-4085-bd88-01b11c3bb491", 00:24:16.872 "assigned_rate_limits": { 00:24:16.872 "rw_ios_per_sec": 0, 00:24:16.872 "rw_mbytes_per_sec": 0, 00:24:16.872 "r_mbytes_per_sec": 0, 00:24:16.872 "w_mbytes_per_sec": 0 00:24:16.872 }, 00:24:16.872 "claimed": false, 00:24:16.872 "zoned": false, 00:24:16.872 "supported_io_types": { 00:24:16.872 "read": true, 00:24:16.872 "write": true, 00:24:16.872 "unmap": false, 00:24:16.872 "flush": true, 00:24:16.872 "reset": true, 00:24:16.872 "nvme_admin": true, 00:24:16.872 "nvme_io": true, 00:24:16.872 "nvme_io_md": false, 00:24:16.872 "write_zeroes": true, 00:24:16.872 "zcopy": false, 00:24:16.872 "get_zone_info": false, 00:24:16.872 "zone_management": false, 00:24:16.872 "zone_append": false, 00:24:16.872 "compare": true, 00:24:16.872 "compare_and_write": true, 00:24:16.872 "abort": true, 00:24:16.872 "seek_hole": false, 00:24:16.872 "seek_data": false, 00:24:16.872 "copy": true, 00:24:16.872 "nvme_iov_md": false 00:24:16.872 }, 00:24:16.872 "memory_domains": [ 00:24:16.872 { 00:24:16.872 "dma_device_id": "system", 00:24:16.872 "dma_device_type": 1 00:24:16.872 } 00:24:16.872 ], 00:24:16.872 "driver_specific": { 00:24:16.872 "nvme": [ 00:24:16.872 { 00:24:16.872 "trid": { 00:24:16.872 "trtype": "TCP", 00:24:16.872 "adrfam": "IPv4", 00:24:16.873 "traddr": "10.0.0.2", 00:24:16.873 "trsvcid": "4421", 00:24:16.873 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:16.873 }, 00:24:16.873 "ctrlr_data": { 00:24:16.873 "cntlid": 3, 00:24:16.873 "vendor_id": "0x8086", 00:24:16.873 "model_number": "SPDK bdev Controller", 00:24:16.873 "serial_number": "00000000000000000000", 00:24:16.873 "firmware_revision": "24.09", 00:24:16.873 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:16.873 "oacs": { 00:24:16.873 "security": 0, 00:24:16.873 "format": 0, 00:24:16.873 "firmware": 0, 00:24:16.873 "ns_manage": 0 00:24:16.873 }, 00:24:16.873 "multi_ctrlr": true, 00:24:16.873 "ana_reporting": false 00:24:16.873 }, 00:24:16.873 "vs": { 00:24:16.873 "nvme_version": "1.3" 00:24:16.873 }, 00:24:16.873 "ns_data": { 00:24:16.873 "id": 1, 00:24:16.873 "can_share": true 00:24:16.873 } 00:24:16.873 } 00:24:16.873 ], 00:24:16.873 "mp_policy": "active_passive" 00:24:16.873 } 00:24:16.873 } 00:24:16.873 ] 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.5cWpx3hbQk 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:16.873 10:12:55 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:16.873 rmmod nvme_tcp 00:24:16.873 rmmod nvme_fabrics 00:24:16.873 rmmod nvme_keyring 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1382084 ']' 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1382084 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1382084 ']' 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1382084 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1382084 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1382084' 00:24:16.873 killing process with pid 1382084 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1382084 00:24:16.873 [2024-07-25 10:12:55.964391] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:16.873 [2024-07-25 10:12:55.964418] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:16.873 10:12:55 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1382084 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.135 10:12:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.135 10:12:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.045 10:12:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:19.045 00:24:19.045 real 0m11.266s 00:24:19.045 user 0m4.056s 00:24:19.045 sys 0m5.700s 00:24:19.045 10:12:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.045 10:12:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:19.045 ************************************ 00:24:19.045 END TEST nvmf_async_init 00:24:19.045 ************************************ 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.307 ************************************ 00:24:19.307 START TEST dma 00:24:19.307 ************************************ 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:19.307 * Looking for test storage... 00:24:19.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.307 
10:12:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.307 10:12:58 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.307 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:19.308 00:24:19.308 real 0m0.132s 00:24:19.308 user 0m0.059s 00:24:19.308 sys 0m0.079s 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:19.308 ************************************ 00:24:19.308 END TEST dma 00:24:19.308 ************************************ 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:19.308 10:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.569 ************************************ 00:24:19.569 START TEST nvmf_identify 00:24:19.569 ************************************ 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:19.569 * Looking for test storage... 00:24:19.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.569 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:19.570 10:12:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:27.714 10:13:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:27.714 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.714 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.715 10:13:05 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:27.715 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:27.715 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:27.715 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:27.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:24:27.715 00:24:27.715 --- 10.0.0.2 ping statistics --- 00:24:27.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.715 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:27.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:24:27.715 00:24:27.715 --- 10.0.0.1 ping statistics --- 00:24:27.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.715 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1386565 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1386565 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1386565 ']' 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:27.715 10:13:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.715 [2024-07-25 10:13:05.894181] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
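(At this point the identify test has its two-port topology in place: cvl_0_0 has been moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2 and cvl_0_1 stays in the default namespace as 10.0.0.1, the pings above confirm reachability, and nvmf_tgt is being started inside that namespace. The RPC calls traced below then build the target configuration. The following is a condensed sketch of the same bring-up with paths, NQNs and flags copied from the trace; calling scripts/rpc.py directly in place of the rpc_cmd helper, and a fixed sleep in place of waitforlisten, are simplifying assumptions.)

#!/usr/bin/env bash
# Condensed sketch of the target bring-up and the RPC sequence traced below (run as root).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
sleep 3   # the test waits on /var/tmp/spdk.sock via waitforlisten; a sleep is only a stand-in
rpc_py() { "$SPDK/scripts/rpc.py" "$@"; }          # assumption: rpc_cmd forwards its args to scripts/rpc.py
rpc_py nvmf_create_transport -t tcp -o -u 8192
rpc_py bdev_malloc_create 64 512 -b Malloc0
rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420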
00:24:27.715 [2024-07-25 10:13:05.894256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.715 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.715 [2024-07-25 10:13:05.966407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.715 [2024-07-25 10:13:06.042085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.715 [2024-07-25 10:13:06.042121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.715 [2024-07-25 10:13:06.042129] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.715 [2024-07-25 10:13:06.042136] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.715 [2024-07-25 10:13:06.042142] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.715 [2024-07-25 10:13:06.042279] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.715 [2024-07-25 10:13:06.042379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.715 [2024-07-25 10:13:06.042533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:27.715 [2024-07-25 10:13:06.042534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.715 [2024-07-25 10:13:06.689026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.715 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 Malloc0 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 [2024-07-25 10:13:06.788588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:27.716 [ 00:24:27.716 { 00:24:27.716 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:27.716 "subtype": "Discovery", 00:24:27.716 "listen_addresses": [ 00:24:27.716 { 00:24:27.716 "trtype": "TCP", 00:24:27.716 "adrfam": "IPv4", 00:24:27.716 "traddr": "10.0.0.2", 00:24:27.716 "trsvcid": "4420" 00:24:27.716 } 00:24:27.716 ], 00:24:27.716 "allow_any_host": true, 00:24:27.716 "hosts": [] 00:24:27.716 }, 00:24:27.716 { 00:24:27.716 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:27.716 "subtype": "NVMe", 00:24:27.716 "listen_addresses": [ 00:24:27.716 { 00:24:27.716 "trtype": "TCP", 00:24:27.716 "adrfam": "IPv4", 00:24:27.716 "traddr": "10.0.0.2", 00:24:27.716 "trsvcid": "4420" 00:24:27.716 } 00:24:27.716 ], 00:24:27.716 "allow_any_host": true, 00:24:27.716 "hosts": [], 00:24:27.716 "serial_number": "SPDK00000000000001", 00:24:27.716 "model_number": "SPDK bdev Controller", 00:24:27.716 "max_namespaces": 32, 00:24:27.716 "min_cntlid": 1, 00:24:27.716 "max_cntlid": 65519, 00:24:27.716 "namespaces": [ 00:24:27.716 { 00:24:27.716 "nsid": 1, 00:24:27.716 "bdev_name": "Malloc0", 00:24:27.716 "name": "Malloc0", 00:24:27.716 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:27.716 "eui64": "ABCDEF0123456789", 00:24:27.716 "uuid": "19449a3d-466c-4c8e-b502-b8ee4b410b6b" 00:24:27.716 } 00:24:27.716 ] 00:24:27.716 } 00:24:27.716 ] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.716 10:13:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:27.980 [2024-07-25 10:13:06.850362] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:27.980 [2024-07-25 10:13:06.850404] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386891 ] 00:24:27.980 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.980 [2024-07-25 10:13:06.883783] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:27.980 [2024-07-25 10:13:06.883827] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:27.980 [2024-07-25 10:13:06.883832] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:27.980 [2024-07-25 10:13:06.883842] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:27.980 [2024-07-25 10:13:06.883849] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:27.980 [2024-07-25 10:13:06.887229] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:27.980 [2024-07-25 10:13:06.887255] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x130bec0 0 00:24:27.980 [2024-07-25 10:13:06.895209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:27.980 [2024-07-25 10:13:06.895225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:27.980 [2024-07-25 10:13:06.895230] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:27.980 [2024-07-25 10:13:06.895234] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:27.980 [2024-07-25 10:13:06.895270] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.980 [2024-07-25 10:13:06.895276] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.980 [2024-07-25 10:13:06.895280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.980 [2024-07-25 10:13:06.895292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:27.980 [2024-07-25 10:13:06.895309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.980 [2024-07-25 10:13:06.902211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.980 [2024-07-25 10:13:06.902220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.980 [2024-07-25 10:13:06.902223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.980 [2024-07-25 10:13:06.902228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.980 [2024-07-25 10:13:06.902237] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:27.980 [2024-07-25 10:13:06.902244] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:27.980 [2024-07-25 10:13:06.902249] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:24:27.980 [2024-07-25 10:13:06.902261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.980 [2024-07-25 10:13:06.902269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.980 [2024-07-25 10:13:06.902272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.902280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.902293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.902542] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.902549] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.902553] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902557] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.902565] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:27.981 [2024-07-25 10:13:06.902573] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:27.981 [2024-07-25 10:13:06.902581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902588] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.902595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.902607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.902858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.902865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.902869] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902872] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.902878] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:27.981 [2024-07-25 10:13:06.902885] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.902892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.902899] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.902905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.902916] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.903149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.903155] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.903159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903162] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.903167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.903176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.903190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.903210] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.903420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.903427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.903430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.903439] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:27.981 [2024-07-25 10:13:06.903444] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.903451] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.903556] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:27.981 [2024-07-25 10:13:06.903561] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.903569] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903573] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.903583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.903594] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.903846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:27.981 [2024-07-25 10:13:06.903852] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.903855] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.903864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:27.981 [2024-07-25 10:13:06.903873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.903880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.903887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.903897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.904138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.904145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.904148] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.904152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.904156] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:27.981 [2024-07-25 10:13:06.904161] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:27.981 [2024-07-25 10:13:06.904168] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:27.981 [2024-07-25 10:13:06.904179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:27.981 [2024-07-25 10:13:06.904188] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.904192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.904199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.981 [2024-07-25 10:13:06.904217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.904488] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.981 [2024-07-25 10:13:06.904495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.981 [2024-07-25 10:13:06.904499] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.904503] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bec0): datao=0, datal=4096, cccid=0 00:24:27.981 [2024-07-25 10:13:06.904508] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138ee40) on tqpair(0x130bec0): expected_datao=0, payload_size=4096 00:24:27.981 [2024-07-25 10:13:06.904512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.904574] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.904578] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.948208] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.948219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.948222] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.948226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.981 [2024-07-25 10:13:06.948234] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:27.981 [2024-07-25 10:13:06.948238] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:27.981 [2024-07-25 10:13:06.948243] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:27.981 [2024-07-25 10:13:06.948248] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:27.981 [2024-07-25 10:13:06.948252] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:27.981 [2024-07-25 10:13:06.948257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:27.981 [2024-07-25 10:13:06.948265] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:27.981 [2024-07-25 10:13:06.948276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.948280] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.981 [2024-07-25 10:13:06.948284] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x130bec0) 00:24:27.981 [2024-07-25 10:13:06.948292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:27.981 [2024-07-25 10:13:06.948304] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.981 [2024-07-25 10:13:06.948574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.981 [2024-07-25 10:13:06.948581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.981 [2024-07-25 10:13:06.948584] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:06.948598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948602] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.948612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.982 [2024-07-25 10:13:06.948618] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.948631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.982 [2024-07-25 10:13:06.948637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.948650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.982 [2024-07-25 10:13:06.948656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948663] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.948668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.982 [2024-07-25 10:13:06.948673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:27.982 [2024-07-25 10:13:06.948684] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:27.982 [2024-07-25 10:13:06.948691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.948694] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.948701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.982 [2024-07-25 10:13:06.948714] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138ee40, cid 0, qid 0 00:24:27.982 [2024-07-25 10:13:06.948719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138efc0, cid 1, qid 0 00:24:27.982 [2024-07-25 10:13:06.948724] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f140, cid 2, qid 0 00:24:27.982 [2024-07-25 10:13:06.948729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.982 [2024-07-25 10:13:06.948733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f440, cid 4, qid 0 00:24:27.982 [2024-07-25 10:13:06.948996] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.982 [2024-07-25 10:13:06.949002] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.982 [2024-07-25 10:13:06.949006] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f440) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:06.949014] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:27.982 [2024-07-25 10:13:06.949019] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:27.982 [2024-07-25 10:13:06.949032] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949036] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.949043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.982 [2024-07-25 10:13:06.949054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f440, cid 4, qid 0 00:24:27.982 [2024-07-25 10:13:06.949305] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.982 [2024-07-25 10:13:06.949313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.982 [2024-07-25 10:13:06.949316] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949320] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bec0): datao=0, datal=4096, cccid=4 00:24:27.982 [2024-07-25 10:13:06.949324] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138f440) on tqpair(0x130bec0): expected_datao=0, payload_size=4096 00:24:27.982 [2024-07-25 10:13:06.949328] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949335] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949339] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.982 [2024-07-25 10:13:06.949514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.982 [2024-07-25 10:13:06.949518] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949522] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f440) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:06.949533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:27.982 [2024-07-25 10:13:06.949556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949560] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.949567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.982 [2024-07-25 10:13:06.949574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 
10:13:06.949587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:27.982 [2024-07-25 10:13:06.949601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f440, cid 4, qid 0 00:24:27.982 [2024-07-25 10:13:06.949606] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f5c0, cid 5, qid 0 00:24:27.982 [2024-07-25 10:13:06.949923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.982 [2024-07-25 10:13:06.949929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.982 [2024-07-25 10:13:06.949933] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949936] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bec0): datao=0, datal=1024, cccid=4 00:24:27.982 [2024-07-25 10:13:06.949941] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138f440) on tqpair(0x130bec0): expected_datao=0, payload_size=1024 00:24:27.982 [2024-07-25 10:13:06.949945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949951] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949955] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949961] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.982 [2024-07-25 10:13:06.949966] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.982 [2024-07-25 10:13:06.949972] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.949976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f5c0) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:06.990453] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.982 [2024-07-25 10:13:06.990465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.982 [2024-07-25 10:13:06.990469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.990473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f440) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:06.990491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.990495] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:06.990503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.982 [2024-07-25 10:13:06.990520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f440, cid 4, qid 0 00:24:27.982 [2024-07-25 10:13:06.990735] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.982 [2024-07-25 10:13:06.990743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.982 [2024-07-25 10:13:06.990746] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.990750] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bec0): datao=0, datal=3072, cccid=4 00:24:27.982 [2024-07-25 10:13:06.990754] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138f440) on tqpair(0x130bec0): expected_datao=0, payload_size=3072 00:24:27.982 
[2024-07-25 10:13:06.990758] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.990821] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:06.990825] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:07.034209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.982 [2024-07-25 10:13:07.034222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.982 [2024-07-25 10:13:07.034225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:07.034229] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f440) on tqpair=0x130bec0 00:24:27.982 [2024-07-25 10:13:07.034239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:07.034243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x130bec0) 00:24:27.982 [2024-07-25 10:13:07.034250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.982 [2024-07-25 10:13:07.034266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f440, cid 4, qid 0 00:24:27.982 [2024-07-25 10:13:07.034518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:27.982 [2024-07-25 10:13:07.034525] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:27.982 [2024-07-25 10:13:07.034529] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:07.034532] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x130bec0): datao=0, datal=8, cccid=4 00:24:27.982 [2024-07-25 10:13:07.034537] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x138f440) on tqpair(0x130bec0): expected_datao=0, payload_size=8 00:24:27.982 [2024-07-25 10:13:07.034541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.982 [2024-07-25 10:13:07.034548] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:27.983 [2024-07-25 10:13:07.034551] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:27.983 [2024-07-25 10:13:07.075447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.983 [2024-07-25 10:13:07.075458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.983 [2024-07-25 10:13:07.075462] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.983 [2024-07-25 10:13:07.075469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f440) on tqpair=0x130bec0 00:24:27.983 ===================================================== 00:24:27.983 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:27.983 ===================================================== 00:24:27.983 Controller Capabilities/Features 00:24:27.983 ================================ 00:24:27.983 Vendor ID: 0000 00:24:27.983 Subsystem Vendor ID: 0000 00:24:27.983 Serial Number: .................... 00:24:27.983 Model Number: ........................................ 
00:24:27.983 Firmware Version: 24.09 00:24:27.983 Recommended Arb Burst: 0 00:24:27.983 IEEE OUI Identifier: 00 00 00 00:24:27.983 Multi-path I/O 00:24:27.983 May have multiple subsystem ports: No 00:24:27.983 May have multiple controllers: No 00:24:27.983 Associated with SR-IOV VF: No 00:24:27.983 Max Data Transfer Size: 131072 00:24:27.983 Max Number of Namespaces: 0 00:24:27.983 Max Number of I/O Queues: 1024 00:24:27.983 NVMe Specification Version (VS): 1.3 00:24:27.983 NVMe Specification Version (Identify): 1.3 00:24:27.983 Maximum Queue Entries: 128 00:24:27.983 Contiguous Queues Required: Yes 00:24:27.983 Arbitration Mechanisms Supported 00:24:27.983 Weighted Round Robin: Not Supported 00:24:27.983 Vendor Specific: Not Supported 00:24:27.983 Reset Timeout: 15000 ms 00:24:27.983 Doorbell Stride: 4 bytes 00:24:27.983 NVM Subsystem Reset: Not Supported 00:24:27.983 Command Sets Supported 00:24:27.983 NVM Command Set: Supported 00:24:27.983 Boot Partition: Not Supported 00:24:27.983 Memory Page Size Minimum: 4096 bytes 00:24:27.983 Memory Page Size Maximum: 4096 bytes 00:24:27.983 Persistent Memory Region: Not Supported 00:24:27.983 Optional Asynchronous Events Supported 00:24:27.983 Namespace Attribute Notices: Not Supported 00:24:27.983 Firmware Activation Notices: Not Supported 00:24:27.983 ANA Change Notices: Not Supported 00:24:27.983 PLE Aggregate Log Change Notices: Not Supported 00:24:27.983 LBA Status Info Alert Notices: Not Supported 00:24:27.983 EGE Aggregate Log Change Notices: Not Supported 00:24:27.983 Normal NVM Subsystem Shutdown event: Not Supported 00:24:27.983 Zone Descriptor Change Notices: Not Supported 00:24:27.983 Discovery Log Change Notices: Supported 00:24:27.983 Controller Attributes 00:24:27.983 128-bit Host Identifier: Not Supported 00:24:27.983 Non-Operational Permissive Mode: Not Supported 00:24:27.983 NVM Sets: Not Supported 00:24:27.983 Read Recovery Levels: Not Supported 00:24:27.983 Endurance Groups: Not Supported 00:24:27.983 Predictable Latency Mode: Not Supported 00:24:27.983 Traffic Based Keep ALive: Not Supported 00:24:27.983 Namespace Granularity: Not Supported 00:24:27.983 SQ Associations: Not Supported 00:24:27.983 UUID List: Not Supported 00:24:27.983 Multi-Domain Subsystem: Not Supported 00:24:27.983 Fixed Capacity Management: Not Supported 00:24:27.983 Variable Capacity Management: Not Supported 00:24:27.983 Delete Endurance Group: Not Supported 00:24:27.983 Delete NVM Set: Not Supported 00:24:27.983 Extended LBA Formats Supported: Not Supported 00:24:27.983 Flexible Data Placement Supported: Not Supported 00:24:27.983 00:24:27.983 Controller Memory Buffer Support 00:24:27.983 ================================ 00:24:27.983 Supported: No 00:24:27.983 00:24:27.983 Persistent Memory Region Support 00:24:27.983 ================================ 00:24:27.983 Supported: No 00:24:27.983 00:24:27.983 Admin Command Set Attributes 00:24:27.983 ============================ 00:24:27.983 Security Send/Receive: Not Supported 00:24:27.983 Format NVM: Not Supported 00:24:27.983 Firmware Activate/Download: Not Supported 00:24:27.983 Namespace Management: Not Supported 00:24:27.983 Device Self-Test: Not Supported 00:24:27.983 Directives: Not Supported 00:24:27.983 NVMe-MI: Not Supported 00:24:27.983 Virtualization Management: Not Supported 00:24:27.983 Doorbell Buffer Config: Not Supported 00:24:27.983 Get LBA Status Capability: Not Supported 00:24:27.983 Command & Feature Lockdown Capability: Not Supported 00:24:27.983 Abort Command Limit: 1 00:24:27.983 Async 
Event Request Limit: 4 00:24:27.983 Number of Firmware Slots: N/A 00:24:27.983 Firmware Slot 1 Read-Only: N/A 00:24:27.983 Firmware Activation Without Reset: N/A 00:24:27.983 Multiple Update Detection Support: N/A 00:24:27.983 Firmware Update Granularity: No Information Provided 00:24:27.983 Per-Namespace SMART Log: No 00:24:27.983 Asymmetric Namespace Access Log Page: Not Supported 00:24:27.983 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:27.983 Command Effects Log Page: Not Supported 00:24:27.983 Get Log Page Extended Data: Supported 00:24:27.983 Telemetry Log Pages: Not Supported 00:24:27.983 Persistent Event Log Pages: Not Supported 00:24:27.983 Supported Log Pages Log Page: May Support 00:24:27.983 Commands Supported & Effects Log Page: Not Supported 00:24:27.983 Feature Identifiers & Effects Log Page:May Support 00:24:27.983 NVMe-MI Commands & Effects Log Page: May Support 00:24:27.983 Data Area 4 for Telemetry Log: Not Supported 00:24:27.983 Error Log Page Entries Supported: 128 00:24:27.983 Keep Alive: Not Supported 00:24:27.983 00:24:27.983 NVM Command Set Attributes 00:24:27.983 ========================== 00:24:27.983 Submission Queue Entry Size 00:24:27.983 Max: 1 00:24:27.983 Min: 1 00:24:27.983 Completion Queue Entry Size 00:24:27.983 Max: 1 00:24:27.983 Min: 1 00:24:27.983 Number of Namespaces: 0 00:24:27.983 Compare Command: Not Supported 00:24:27.983 Write Uncorrectable Command: Not Supported 00:24:27.983 Dataset Management Command: Not Supported 00:24:27.983 Write Zeroes Command: Not Supported 00:24:27.983 Set Features Save Field: Not Supported 00:24:27.983 Reservations: Not Supported 00:24:27.983 Timestamp: Not Supported 00:24:27.983 Copy: Not Supported 00:24:27.983 Volatile Write Cache: Not Present 00:24:27.983 Atomic Write Unit (Normal): 1 00:24:27.983 Atomic Write Unit (PFail): 1 00:24:27.983 Atomic Compare & Write Unit: 1 00:24:27.983 Fused Compare & Write: Supported 00:24:27.983 Scatter-Gather List 00:24:27.983 SGL Command Set: Supported 00:24:27.983 SGL Keyed: Supported 00:24:27.983 SGL Bit Bucket Descriptor: Not Supported 00:24:27.983 SGL Metadata Pointer: Not Supported 00:24:27.983 Oversized SGL: Not Supported 00:24:27.983 SGL Metadata Address: Not Supported 00:24:27.983 SGL Offset: Supported 00:24:27.983 Transport SGL Data Block: Not Supported 00:24:27.983 Replay Protected Memory Block: Not Supported 00:24:27.983 00:24:27.983 Firmware Slot Information 00:24:27.983 ========================= 00:24:27.983 Active slot: 0 00:24:27.983 00:24:27.983 00:24:27.983 Error Log 00:24:27.983 ========= 00:24:27.983 00:24:27.983 Active Namespaces 00:24:27.983 ================= 00:24:27.983 Discovery Log Page 00:24:27.983 ================== 00:24:27.983 Generation Counter: 2 00:24:27.983 Number of Records: 2 00:24:27.983 Record Format: 0 00:24:27.983 00:24:27.983 Discovery Log Entry 0 00:24:27.983 ---------------------- 00:24:27.983 Transport Type: 3 (TCP) 00:24:27.983 Address Family: 1 (IPv4) 00:24:27.983 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:27.983 Entry Flags: 00:24:27.983 Duplicate Returned Information: 1 00:24:27.983 Explicit Persistent Connection Support for Discovery: 1 00:24:27.983 Transport Requirements: 00:24:27.983 Secure Channel: Not Required 00:24:27.983 Port ID: 0 (0x0000) 00:24:27.983 Controller ID: 65535 (0xffff) 00:24:27.983 Admin Max SQ Size: 128 00:24:27.983 Transport Service Identifier: 4420 00:24:27.983 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:27.983 Transport Address: 10.0.0.2 00:24:27.983 
Discovery Log Entry 1 00:24:27.983 ---------------------- 00:24:27.983 Transport Type: 3 (TCP) 00:24:27.983 Address Family: 1 (IPv4) 00:24:27.983 Subsystem Type: 2 (NVM Subsystem) 00:24:27.983 Entry Flags: 00:24:27.983 Duplicate Returned Information: 0 00:24:27.983 Explicit Persistent Connection Support for Discovery: 0 00:24:27.983 Transport Requirements: 00:24:27.984 Secure Channel: Not Required 00:24:27.984 Port ID: 0 (0x0000) 00:24:27.984 Controller ID: 65535 (0xffff) 00:24:27.984 Admin Max SQ Size: 128 00:24:27.984 Transport Service Identifier: 4420 00:24:27.984 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:27.984 Transport Address: 10.0.0.2 [2024-07-25 10:13:07.075550] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:27.984 [2024-07-25 10:13:07.075560] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138ee40) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.075567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.984 [2024-07-25 10:13:07.075572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138efc0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.075577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.984 [2024-07-25 10:13:07.075582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f140) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.075586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.984 [2024-07-25 10:13:07.075591] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.075596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:27.984 [2024-07-25 10:13:07.075606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.075610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.075614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.075621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.075636] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.075802] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.075809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.075813] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.075817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.075824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.075827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.075831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 
10:13:07.075838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.075852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.076109] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.076115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.076119] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.076127] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:27.984 [2024-07-25 10:13:07.076132] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:27.984 [2024-07-25 10:13:07.076141] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076149] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.076155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.076169] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.076465] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.076472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.076475] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076479] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.076489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076493] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076496] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.076503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.076515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.076734] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.076740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.076744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.076757] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.076764] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.076771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.076781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.077071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.077077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.077080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077084] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.077093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077097] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.077107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.077118] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.077375] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.077382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.077386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077389] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.077399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077403] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.077413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.077427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.077731] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.077738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.077741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.077754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.077762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.077768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.077779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.078013] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.078019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.078023] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078026] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.078036] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078043] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.078050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.078060] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.078288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.984 [2024-07-25 10:13:07.078295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.984 [2024-07-25 10:13:07.078299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078302] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.984 [2024-07-25 10:13:07.078312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.984 [2024-07-25 10:13:07.078319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.984 [2024-07-25 10:13:07.078326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.984 [2024-07-25 10:13:07.078337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.984 [2024-07-25 10:13:07.078592] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.985 [2024-07-25 10:13:07.078598] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.985 [2024-07-25 10:13:07.078602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.985 [2024-07-25 10:13:07.078615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.985 [2024-07-25 10:13:07.078629] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.985 [2024-07-25 10:13:07.078639] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.985 
[2024-07-25 10:13:07.078897] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.985 [2024-07-25 10:13:07.078904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.985 [2024-07-25 10:13:07.078907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.985 [2024-07-25 10:13:07.078920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.078928] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.985 [2024-07-25 10:13:07.078934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.985 [2024-07-25 10:13:07.078945] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.985 [2024-07-25 10:13:07.079170] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.985 [2024-07-25 10:13:07.079176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.985 [2024-07-25 10:13:07.079180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.079184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.985 [2024-07-25 10:13:07.079193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.079197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.083207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x130bec0) 00:24:27.985 [2024-07-25 10:13:07.083216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.985 [2024-07-25 10:13:07.083229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x138f2c0, cid 3, qid 0 00:24:27.985 [2024-07-25 10:13:07.083495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:27.985 [2024-07-25 10:13:07.083502] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:27.985 [2024-07-25 10:13:07.083505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:27.985 [2024-07-25 10:13:07.083509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x138f2c0) on tqpair=0x130bec0 00:24:27.985 [2024-07-25 10:13:07.083517] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:27.985 00:24:27.985 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:28.250 [2024-07-25 10:13:07.122485] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
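For reference, the -r argument handed to spdk_nvme_identify above is an SPDK transport ID string. A minimal standalone sketch of the same connection path, assuming the public API declared in spdk/nvme.h and spdk/env.h (spdk_nvme_transport_id_parse, spdk_nvme_connect, spdk_nvme_ctrlr_get_data) and with error handling trimmed, could look like the snippet below; it is illustrative only and is not part of this test run.

/* Hedged sketch: connect to the NVMe-oF/TCP subsystem exercised above.
 * "identify_sketch" is a made-up application name; error handling trimmed. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string that spdk_nvme_identify received via -r. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the admin-queue bring-up with default controller options. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected: CNTLID 0x%04x, MDTS %u\n", cdata->cntlid, cdata->mdts);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() is what drives the admin-queue bring-up that the *DEBUG* lines below trace: FABRIC CONNECT, the VS/CAP property reads, CC.EN, and the IDENTIFY commands.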
00:24:28.250 [2024-07-25 10:13:07.122528] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1386919 ] 00:24:28.250 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.250 [2024-07-25 10:13:07.153779] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:28.250 [2024-07-25 10:13:07.153820] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:28.250 [2024-07-25 10:13:07.153825] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:28.250 [2024-07-25 10:13:07.153836] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:28.250 [2024-07-25 10:13:07.153843] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:28.250 [2024-07-25 10:13:07.157228] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:28.250 [2024-07-25 10:13:07.157251] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9d8ec0 0 00:24:28.250 [2024-07-25 10:13:07.165210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:28.250 [2024-07-25 10:13:07.165226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:28.250 [2024-07-25 10:13:07.165230] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:28.250 [2024-07-25 10:13:07.165233] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:28.250 [2024-07-25 10:13:07.165269] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.165274] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.165278] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.250 [2024-07-25 10:13:07.165289] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:28.250 [2024-07-25 10:13:07.165305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.250 [2024-07-25 10:13:07.173211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.250 [2024-07-25 10:13:07.173220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.250 [2024-07-25 10:13:07.173224] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173228] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.250 [2024-07-25 10:13:07.173236] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:28.250 [2024-07-25 10:13:07.173241] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:28.250 [2024-07-25 10:13:07.173246] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:28.250 [2024-07-25 10:13:07.173258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173262] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.250 
[2024-07-25 10:13:07.173266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.250 [2024-07-25 10:13:07.173273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.250 [2024-07-25 10:13:07.173285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.250 [2024-07-25 10:13:07.173486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.250 [2024-07-25 10:13:07.173495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.250 [2024-07-25 10:13:07.173498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173502] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.250 [2024-07-25 10:13:07.173510] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:28.250 [2024-07-25 10:13:07.173517] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:28.250 [2024-07-25 10:13:07.173524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173528] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.250 [2024-07-25 10:13:07.173539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.250 [2024-07-25 10:13:07.173550] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.250 [2024-07-25 10:13:07.173787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.250 [2024-07-25 10:13:07.173796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.250 [2024-07-25 10:13:07.173800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.250 [2024-07-25 10:13:07.173804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.250 [2024-07-25 10:13:07.173809] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:28.250 [2024-07-25 10:13:07.173816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:28.250 [2024-07-25 10:13:07.173823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.173827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.173830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.173837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.251 [2024-07-25 10:13:07.173848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.174169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.174175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 
[2024-07-25 10:13:07.174179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174182] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 10:13:07.174187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:28.251 [2024-07-25 10:13:07.174196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174207] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174211] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.174217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.251 [2024-07-25 10:13:07.174228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.174444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.174451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 [2024-07-25 10:13:07.174454] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 10:13:07.174462] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:28.251 [2024-07-25 10:13:07.174467] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:28.251 [2024-07-25 10:13:07.174474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:28.251 [2024-07-25 10:13:07.174579] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:28.251 [2024-07-25 10:13:07.174583] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:28.251 [2024-07-25 10:13:07.174590] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.174604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.251 [2024-07-25 10:13:07.174618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.174846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.174853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 [2024-07-25 10:13:07.174856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 
10:13:07.174864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:28.251 [2024-07-25 10:13:07.174873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174877] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.174881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.174887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.251 [2024-07-25 10:13:07.174897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.175133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.175139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 [2024-07-25 10:13:07.175143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.175146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 10:13:07.175150] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:28.251 [2024-07-25 10:13:07.175155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:28.251 [2024-07-25 10:13:07.175163] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:28.251 [2024-07-25 10:13:07.175171] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:28.251 [2024-07-25 10:13:07.175180] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.175184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.175191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.251 [2024-07-25 10:13:07.175208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.175513] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.251 [2024-07-25 10:13:07.175520] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.251 [2024-07-25 10:13:07.175523] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.175527] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=4096, cccid=0 00:24:28.251 [2024-07-25 10:13:07.175532] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5be40) on tqpair(0x9d8ec0): expected_datao=0, payload_size=4096 00:24:28.251 [2024-07-25 10:13:07.175536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.175543] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.175547] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.251 
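The state transitions just traced (read vs, read cap, check en, disable and wait for CSTS.RDY = 0, Setting CC.EN = 1, wait for CSTS.RDY = 1, controller is ready) follow the standard NVMe controller-enable ordering, carried here over Fabrics Property Get/Set commands (the FABRIC PROPERTY GET/SET notices). The sketch below illustrates only that ordering and is not SPDK code: prop_get()/prop_set() and the in-memory register array are stand-ins so the snippet runs, while the register offsets and the EN/RDY bit positions come from the NVMe specification.

/* Hedged sketch of the controller-enable ordering traced above. NOT SPDK code. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CAP  0x00   /* Controller Capabilities */
#define NVME_REG_VS   0x08   /* Version */
#define NVME_REG_CC   0x14   /* Controller Configuration; bit 0 = EN */
#define NVME_REG_CSTS 0x1c   /* Controller Status; bit 0 = RDY */

/* Fake property space standing in for the remote controller so the sketch runs;
 * byte offsets are used directly as array indices purely for simplicity. */
static uint64_t regs[0x20];

static uint64_t prop_get(uint32_t ofs) { return regs[ofs]; }
static void prop_set(uint32_t ofs, uint64_t v) { regs[ofs] = v; regs[NVME_REG_CSTS] = v & 1; }

int main(void)
{
	(void)prop_get(NVME_REG_VS);            /* "setting state to read vs"  */
	(void)prop_get(NVME_REG_CAP);           /* "setting state to read cap" */
	uint64_t cc = prop_get(NVME_REG_CC);    /* "check en"                  */
	prop_set(NVME_REG_CC, cc | 1);          /* "Setting CC.EN = 1"         */
	while ((prop_get(NVME_REG_CSTS) & 1) == 0) {
		/* "wait for CSTS.RDY = 1" - poll until the controller reports ready */
	}
	printf("controller is ready\n");        /* "CC.EN = 1 && CSTS.RDY = 1" */
	return 0;
}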
[2024-07-25 10:13:07.216519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.216529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 [2024-07-25 10:13:07.216532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 10:13:07.216547] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:28.251 [2024-07-25 10:13:07.216552] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:28.251 [2024-07-25 10:13:07.216556] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:28.251 [2024-07-25 10:13:07.216560] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:28.251 [2024-07-25 10:13:07.216565] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:28.251 [2024-07-25 10:13:07.216569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:28.251 [2024-07-25 10:13:07.216578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:28.251 [2024-07-25 10:13:07.216588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.216603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.251 [2024-07-25 10:13:07.216616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.251 [2024-07-25 10:13:07.216772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.251 [2024-07-25 10:13:07.216779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.251 [2024-07-25 10:13:07.216782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.251 [2024-07-25 10:13:07.216792] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216796] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.251 [2024-07-25 10:13:07.216800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9d8ec0) 00:24:28.251 [2024-07-25 10:13:07.216806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.252 [2024-07-25 10:13:07.216812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9d8ec0) 
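At this point the driver has finished identify and is arming asynchronous event reporting (SET FEATURES ASYNC EVENT CONFIGURATION followed by a batch of ASYNC EVENT REQUEST commands). A minimal sketch of how an application could consume those events through SPDK's public API, assuming spdk_nvme_ctrlr_register_aer_callback() and spdk_nvme_ctrlr_process_admin_completions() as declared in spdk/nvme.h and taking an already-connected ctrlr handle as given:

/* Hedged sketch: consuming async events from an already-connected controller.
 * Not taken from this test; the stop flag and function names are illustrative. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	/* cdw0 of the completion carries the async event type/info fields. */
	printf("async event: cdw0=0x%08x\n", cpl->cdw0);
}

void watch_admin_queue(struct spdk_nvme_ctrlr *ctrlr, volatile bool *stop)
{
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

	/* The driver is polled-mode: AER completions (and housekeeping such as
	 * the keep-alive timer the log reports being armed) only make progress
	 * while the application keeps calling this. */
	while (!*stop) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
}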
00:24:28.252 [2024-07-25 10:13:07.216825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.252 [2024-07-25 10:13:07.216831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216835] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.216844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.252 [2024-07-25 10:13:07.216850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216857] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.216862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.252 [2024-07-25 10:13:07.216867] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.216881] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.216887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.216891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.216897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.252 [2024-07-25 10:13:07.216911] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5be40, cid 0, qid 0 00:24:28.252 [2024-07-25 10:13:07.216916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5bfc0, cid 1, qid 0 00:24:28.252 [2024-07-25 10:13:07.216920] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c140, cid 2, qid 0 00:24:28.252 [2024-07-25 10:13:07.216925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.252 [2024-07-25 10:13:07.216930] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.252 [2024-07-25 10:13:07.217191] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.252 [2024-07-25 10:13:07.217197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.252 [2024-07-25 10:13:07.221207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221212] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.252 [2024-07-25 10:13:07.221217] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:28.252 [2024-07-25 10:13:07.221222] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.221233] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.221239] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.221246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.221260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:28.252 [2024-07-25 10:13:07.221273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.252 [2024-07-25 10:13:07.221510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.252 [2024-07-25 10:13:07.221517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.252 [2024-07-25 10:13:07.221521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221524] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.252 [2024-07-25 10:13:07.221590] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.221600] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.221607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.221617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.252 [2024-07-25 10:13:07.221629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.252 [2024-07-25 10:13:07.221875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.252 [2024-07-25 10:13:07.221882] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.252 [2024-07-25 10:13:07.221886] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221889] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=4096, cccid=4 00:24:28.252 [2024-07-25 10:13:07.221894] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c440) on tqpair(0x9d8ec0): expected_datao=0, payload_size=4096 00:24:28.252 [2024-07-25 10:13:07.221898] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221905] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.221909] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.252 [2024-07-25 10:13:07.222065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:24:28.252 [2024-07-25 10:13:07.222069] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222072] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.252 [2024-07-25 10:13:07.222081] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:28.252 [2024-07-25 10:13:07.222092] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.222102] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.222109] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.222119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.252 [2024-07-25 10:13:07.222131] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.252 [2024-07-25 10:13:07.222351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.252 [2024-07-25 10:13:07.222359] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.252 [2024-07-25 10:13:07.222362] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222366] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=4096, cccid=4 00:24:28.252 [2024-07-25 10:13:07.222370] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c440) on tqpair(0x9d8ec0): expected_datao=0, payload_size=4096 00:24:28.252 [2024-07-25 10:13:07.222374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222381] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222384] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.252 [2024-07-25 10:13:07.222568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.252 [2024-07-25 10:13:07.222571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.252 [2024-07-25 10:13:07.222588] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.222597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:28.252 [2024-07-25 10:13:07.222604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.252 [2024-07-25 10:13:07.222617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.252 [2024-07-25 10:13:07.222629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.252 [2024-07-25 10:13:07.222979] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.252 [2024-07-25 10:13:07.222985] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.252 [2024-07-25 10:13:07.222989] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.252 [2024-07-25 10:13:07.222992] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=4096, cccid=4 00:24:28.253 [2024-07-25 10:13:07.222996] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c440) on tqpair(0x9d8ec0): expected_datao=0, payload_size=4096 00:24:28.253 [2024-07-25 10:13:07.223000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223007] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223011] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.223165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.223169] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223173] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.223179] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223187] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223228] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223233] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223243] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:28.253 [2024-07-25 10:13:07.223247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:28.253 [2024-07-25 10:13:07.223252] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:28.253 [2024-07-25 10:13:07.223267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223271] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.223277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.223284] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223288] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.223297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:28.253 [2024-07-25 10:13:07.223312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.253 [2024-07-25 10:13:07.223319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c5c0, cid 5, qid 0 00:24:28.253 [2024-07-25 10:13:07.223561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.223568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.223571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223575] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.223582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.223588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.223591] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223595] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c5c0) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.223604] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223608] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.223614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.223625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c5c0, cid 5, qid 0 00:24:28.253 [2024-07-25 10:13:07.223860] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.223867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.223870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c5c0) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.223883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.223886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.223893] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.223903] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c5c0, cid 5, qid 0 00:24:28.253 [2024-07-25 10:13:07.224095] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.224102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.224105] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c5c0) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.224118] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.224128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.224138] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c5c0, cid 5, qid 0 00:24:28.253 [2024-07-25 10:13:07.224351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.253 [2024-07-25 10:13:07.224358] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.253 [2024-07-25 10:13:07.224361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c5c0) on tqpair=0x9d8ec0 00:24:28.253 [2024-07-25 10:13:07.224380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224384] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.224391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.224400] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.224410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.224417] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.224427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.224434] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d8ec0) 00:24:28.253 [2024-07-25 10:13:07.224444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.253 [2024-07-25 10:13:07.224456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c5c0, cid 5, qid 0 00:24:28.253 [2024-07-25 10:13:07.224461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c440, cid 4, qid 0 00:24:28.253 [2024-07-25 10:13:07.224466] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c740, cid 6, qid 0 00:24:28.253 [2024-07-25 
10:13:07.224471] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c8c0, cid 7, qid 0 00:24:28.253 [2024-07-25 10:13:07.224732] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.253 [2024-07-25 10:13:07.224739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.253 [2024-07-25 10:13:07.224743] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.224746] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=8192, cccid=5 00:24:28.253 [2024-07-25 10:13:07.224751] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c5c0) on tqpair(0x9d8ec0): expected_datao=0, payload_size=8192 00:24:28.253 [2024-07-25 10:13:07.224755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225038] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225042] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.253 [2024-07-25 10:13:07.225053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.253 [2024-07-25 10:13:07.225056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225060] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=512, cccid=4 00:24:28.253 [2024-07-25 10:13:07.225064] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c440) on tqpair(0x9d8ec0): expected_datao=0, payload_size=512 00:24:28.253 [2024-07-25 10:13:07.225068] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225074] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225078] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.253 [2024-07-25 10:13:07.225089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.253 [2024-07-25 10:13:07.225092] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.253 [2024-07-25 10:13:07.225096] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=512, cccid=6 00:24:28.253 [2024-07-25 10:13:07.225100] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c740) on tqpair(0x9d8ec0): expected_datao=0, payload_size=512 00:24:28.254 [2024-07-25 10:13:07.225104] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225113] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225116] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:28.254 [2024-07-25 10:13:07.225128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:28.254 [2024-07-25 10:13:07.225131] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225134] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9d8ec0): datao=0, datal=4096, cccid=7 00:24:28.254 [2024-07-25 10:13:07.225138] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa5c8c0) on tqpair(0x9d8ec0): expected_datao=0, payload_size=4096 00:24:28.254 [2024-07-25 10:13:07.225143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225149] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.225153] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.229209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.254 [2024-07-25 10:13:07.229217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.254 [2024-07-25 10:13:07.229220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.229224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c5c0) on tqpair=0x9d8ec0 00:24:28.254 [2024-07-25 10:13:07.229237] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.254 [2024-07-25 10:13:07.229243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.254 [2024-07-25 10:13:07.229247] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.229250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c440) on tqpair=0x9d8ec0 00:24:28.254 [2024-07-25 10:13:07.229260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.254 [2024-07-25 10:13:07.229266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.254 [2024-07-25 10:13:07.229269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.229273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c740) on tqpair=0x9d8ec0 00:24:28.254 [2024-07-25 10:13:07.229280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.254 [2024-07-25 10:13:07.229286] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.254 [2024-07-25 10:13:07.229289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.254 [2024-07-25 10:13:07.229293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c8c0) on tqpair=0x9d8ec0 00:24:28.254 ===================================================== 00:24:28.254 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:28.254 ===================================================== 00:24:28.254 Controller Capabilities/Features 00:24:28.254 ================================ 00:24:28.254 Vendor ID: 8086 00:24:28.254 Subsystem Vendor ID: 8086 00:24:28.254 Serial Number: SPDK00000000000001 00:24:28.254 Model Number: SPDK bdev Controller 00:24:28.254 Firmware Version: 24.09 00:24:28.254 Recommended Arb Burst: 6 00:24:28.254 IEEE OUI Identifier: e4 d2 5c 00:24:28.254 Multi-path I/O 00:24:28.254 May have multiple subsystem ports: Yes 00:24:28.254 May have multiple controllers: Yes 00:24:28.254 Associated with SR-IOV VF: No 00:24:28.254 Max Data Transfer Size: 131072 00:24:28.254 Max Number of Namespaces: 32 00:24:28.254 Max Number of I/O Queues: 127 00:24:28.254 NVMe Specification Version (VS): 1.3 00:24:28.254 NVMe Specification Version (Identify): 1.3 00:24:28.254 Maximum Queue Entries: 128 00:24:28.254 Contiguous Queues Required: Yes 00:24:28.254 Arbitration Mechanisms Supported 00:24:28.254 Weighted Round Robin: Not Supported 00:24:28.254 Vendor Specific: Not Supported 00:24:28.254 Reset Timeout: 15000 ms 00:24:28.254 
Doorbell Stride: 4 bytes 00:24:28.254 NVM Subsystem Reset: Not Supported 00:24:28.254 Command Sets Supported 00:24:28.254 NVM Command Set: Supported 00:24:28.254 Boot Partition: Not Supported 00:24:28.254 Memory Page Size Minimum: 4096 bytes 00:24:28.254 Memory Page Size Maximum: 4096 bytes 00:24:28.254 Persistent Memory Region: Not Supported 00:24:28.254 Optional Asynchronous Events Supported 00:24:28.254 Namespace Attribute Notices: Supported 00:24:28.254 Firmware Activation Notices: Not Supported 00:24:28.254 ANA Change Notices: Not Supported 00:24:28.254 PLE Aggregate Log Change Notices: Not Supported 00:24:28.254 LBA Status Info Alert Notices: Not Supported 00:24:28.254 EGE Aggregate Log Change Notices: Not Supported 00:24:28.254 Normal NVM Subsystem Shutdown event: Not Supported 00:24:28.254 Zone Descriptor Change Notices: Not Supported 00:24:28.254 Discovery Log Change Notices: Not Supported 00:24:28.254 Controller Attributes 00:24:28.254 128-bit Host Identifier: Supported 00:24:28.254 Non-Operational Permissive Mode: Not Supported 00:24:28.254 NVM Sets: Not Supported 00:24:28.254 Read Recovery Levels: Not Supported 00:24:28.254 Endurance Groups: Not Supported 00:24:28.254 Predictable Latency Mode: Not Supported 00:24:28.254 Traffic Based Keep ALive: Not Supported 00:24:28.254 Namespace Granularity: Not Supported 00:24:28.254 SQ Associations: Not Supported 00:24:28.254 UUID List: Not Supported 00:24:28.254 Multi-Domain Subsystem: Not Supported 00:24:28.254 Fixed Capacity Management: Not Supported 00:24:28.254 Variable Capacity Management: Not Supported 00:24:28.254 Delete Endurance Group: Not Supported 00:24:28.254 Delete NVM Set: Not Supported 00:24:28.254 Extended LBA Formats Supported: Not Supported 00:24:28.254 Flexible Data Placement Supported: Not Supported 00:24:28.254 00:24:28.254 Controller Memory Buffer Support 00:24:28.254 ================================ 00:24:28.254 Supported: No 00:24:28.254 00:24:28.254 Persistent Memory Region Support 00:24:28.254 ================================ 00:24:28.254 Supported: No 00:24:28.254 00:24:28.254 Admin Command Set Attributes 00:24:28.254 ============================ 00:24:28.254 Security Send/Receive: Not Supported 00:24:28.254 Format NVM: Not Supported 00:24:28.254 Firmware Activate/Download: Not Supported 00:24:28.254 Namespace Management: Not Supported 00:24:28.254 Device Self-Test: Not Supported 00:24:28.254 Directives: Not Supported 00:24:28.254 NVMe-MI: Not Supported 00:24:28.254 Virtualization Management: Not Supported 00:24:28.254 Doorbell Buffer Config: Not Supported 00:24:28.254 Get LBA Status Capability: Not Supported 00:24:28.254 Command & Feature Lockdown Capability: Not Supported 00:24:28.254 Abort Command Limit: 4 00:24:28.254 Async Event Request Limit: 4 00:24:28.254 Number of Firmware Slots: N/A 00:24:28.254 Firmware Slot 1 Read-Only: N/A 00:24:28.254 Firmware Activation Without Reset: N/A 00:24:28.254 Multiple Update Detection Support: N/A 00:24:28.254 Firmware Update Granularity: No Information Provided 00:24:28.254 Per-Namespace SMART Log: No 00:24:28.254 Asymmetric Namespace Access Log Page: Not Supported 00:24:28.254 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:28.254 Command Effects Log Page: Supported 00:24:28.254 Get Log Page Extended Data: Supported 00:24:28.254 Telemetry Log Pages: Not Supported 00:24:28.254 Persistent Event Log Pages: Not Supported 00:24:28.254 Supported Log Pages Log Page: May Support 00:24:28.254 Commands Supported & Effects Log Page: Not Supported 00:24:28.254 Feature Identifiers & 
Effects Log Page:May Support 00:24:28.254 NVMe-MI Commands & Effects Log Page: May Support 00:24:28.254 Data Area 4 for Telemetry Log: Not Supported 00:24:28.254 Error Log Page Entries Supported: 128 00:24:28.254 Keep Alive: Supported 00:24:28.254 Keep Alive Granularity: 10000 ms 00:24:28.254 00:24:28.254 NVM Command Set Attributes 00:24:28.254 ========================== 00:24:28.254 Submission Queue Entry Size 00:24:28.254 Max: 64 00:24:28.254 Min: 64 00:24:28.254 Completion Queue Entry Size 00:24:28.254 Max: 16 00:24:28.254 Min: 16 00:24:28.254 Number of Namespaces: 32 00:24:28.254 Compare Command: Supported 00:24:28.254 Write Uncorrectable Command: Not Supported 00:24:28.254 Dataset Management Command: Supported 00:24:28.254 Write Zeroes Command: Supported 00:24:28.254 Set Features Save Field: Not Supported 00:24:28.254 Reservations: Supported 00:24:28.254 Timestamp: Not Supported 00:24:28.254 Copy: Supported 00:24:28.254 Volatile Write Cache: Present 00:24:28.254 Atomic Write Unit (Normal): 1 00:24:28.254 Atomic Write Unit (PFail): 1 00:24:28.254 Atomic Compare & Write Unit: 1 00:24:28.254 Fused Compare & Write: Supported 00:24:28.254 Scatter-Gather List 00:24:28.254 SGL Command Set: Supported 00:24:28.254 SGL Keyed: Supported 00:24:28.254 SGL Bit Bucket Descriptor: Not Supported 00:24:28.254 SGL Metadata Pointer: Not Supported 00:24:28.254 Oversized SGL: Not Supported 00:24:28.254 SGL Metadata Address: Not Supported 00:24:28.254 SGL Offset: Supported 00:24:28.254 Transport SGL Data Block: Not Supported 00:24:28.255 Replay Protected Memory Block: Not Supported 00:24:28.255 00:24:28.255 Firmware Slot Information 00:24:28.255 ========================= 00:24:28.255 Active slot: 1 00:24:28.255 Slot 1 Firmware Revision: 24.09 00:24:28.255 00:24:28.255 00:24:28.255 Commands Supported and Effects 00:24:28.255 ============================== 00:24:28.255 Admin Commands 00:24:28.255 -------------- 00:24:28.255 Get Log Page (02h): Supported 00:24:28.255 Identify (06h): Supported 00:24:28.255 Abort (08h): Supported 00:24:28.255 Set Features (09h): Supported 00:24:28.255 Get Features (0Ah): Supported 00:24:28.255 Asynchronous Event Request (0Ch): Supported 00:24:28.255 Keep Alive (18h): Supported 00:24:28.255 I/O Commands 00:24:28.255 ------------ 00:24:28.255 Flush (00h): Supported LBA-Change 00:24:28.255 Write (01h): Supported LBA-Change 00:24:28.255 Read (02h): Supported 00:24:28.255 Compare (05h): Supported 00:24:28.255 Write Zeroes (08h): Supported LBA-Change 00:24:28.255 Dataset Management (09h): Supported LBA-Change 00:24:28.255 Copy (19h): Supported LBA-Change 00:24:28.255 00:24:28.255 Error Log 00:24:28.255 ========= 00:24:28.255 00:24:28.255 Arbitration 00:24:28.255 =========== 00:24:28.255 Arbitration Burst: 1 00:24:28.255 00:24:28.255 Power Management 00:24:28.255 ================ 00:24:28.255 Number of Power States: 1 00:24:28.255 Current Power State: Power State #0 00:24:28.255 Power State #0: 00:24:28.255 Max Power: 0.00 W 00:24:28.255 Non-Operational State: Operational 00:24:28.255 Entry Latency: Not Reported 00:24:28.255 Exit Latency: Not Reported 00:24:28.255 Relative Read Throughput: 0 00:24:28.255 Relative Read Latency: 0 00:24:28.255 Relative Write Throughput: 0 00:24:28.255 Relative Write Latency: 0 00:24:28.255 Idle Power: Not Reported 00:24:28.255 Active Power: Not Reported 00:24:28.255 Non-Operational Permissive Mode: Not Supported 00:24:28.255 00:24:28.255 Health Information 00:24:28.255 ================== 00:24:28.255 Critical Warnings: 00:24:28.255 Available Spare Space: 
OK 00:24:28.255 Temperature: OK 00:24:28.255 Device Reliability: OK 00:24:28.255 Read Only: No 00:24:28.255 Volatile Memory Backup: OK 00:24:28.255 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:28.255 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:28.255 Available Spare: 0% 00:24:28.255 Available Spare Threshold: 0% 00:24:28.255 Life Percentage Used:[2024-07-25 10:13:07.229395] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.229400] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9d8ec0) 00:24:28.255 [2024-07-25 10:13:07.229407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.255 [2024-07-25 10:13:07.229421] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c8c0, cid 7, qid 0 00:24:28.255 [2024-07-25 10:13:07.229688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.255 [2024-07-25 10:13:07.229695] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.255 [2024-07-25 10:13:07.229698] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.229702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c8c0) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.229734] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:28.255 [2024-07-25 10:13:07.229743] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5be40) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.229749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.255 [2024-07-25 10:13:07.229754] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5bfc0) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.229762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.255 [2024-07-25 10:13:07.229767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c140) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.229771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.255 [2024-07-25 10:13:07.229776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.229781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.255 [2024-07-25 10:13:07.229788] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.229792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.229796] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.255 [2024-07-25 10:13:07.229803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.255 [2024-07-25 10:13:07.229816] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.255 [2024-07-25 10:13:07.230096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.255 [2024-07-25 10:13:07.230102] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.255 [2024-07-25 10:13:07.230106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.230116] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.255 [2024-07-25 10:13:07.230130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.255 [2024-07-25 10:13:07.230144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.255 [2024-07-25 10:13:07.230402] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.255 [2024-07-25 10:13:07.230410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.255 [2024-07-25 10:13:07.230413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.255 [2024-07-25 10:13:07.230421] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:28.255 [2024-07-25 10:13:07.230426] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:28.255 [2024-07-25 10:13:07.230435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.255 [2024-07-25 10:13:07.230442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.255 [2024-07-25 10:13:07.230449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.255 [2024-07-25 10:13:07.230461] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.255 [2024-07-25 10:13:07.230667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.255 [2024-07-25 10:13:07.230673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.255 [2024-07-25 10:13:07.230676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230680] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.230690] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230696] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230700] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.230706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.230717] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.230958] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.230965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.230968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230972] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.230981] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.230989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.230995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.231006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.231219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.231226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.231230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.231244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231251] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.231258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.231268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.231517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.231523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.231526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.231539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.231553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.231563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.231799] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.231806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.231809] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.231822] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.231831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.231838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.231848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.232120] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.232126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.232130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232133] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.232143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232147] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232150] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.232157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.232167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.232425] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.232432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.232436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.232449] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232453] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.232463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.232474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.232730] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.232737] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.232740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 
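The trace above is the host-side NVMe/TCP state machine at work: PDU type 7 (C2HData) carries the 4096-byte Identify payloads back to the host, PDU type 5 (CapsuleResp) delivers the matching completions, and the repeated FABRIC PROPERTY GET commands on cid 3 that follow "Prepare to destruct SSD" are the shutdown poll run by nvme_ctrlr_shutdown_poll_async while it waits for the controller to report shutdown complete. A rough nvme-cli equivalent of the same connect/identify/teardown cycle against this listener is sketched below; it assumes the kernel nvme-tcp initiator is available, and /dev/nvme1 is only a placeholder for whatever controller name the host actually assigns.

    # Rough nvme-cli equivalent of the flow traced here (sketch, not part of the test):
    modprobe nvme-tcp
    nvme discover   -t tcp -a 10.0.0.2 -s 4420
    nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme id-ctrl    /dev/nvme1            # same data as the Identify dump interleaved with this trace
    nvme id-ns      /dev/nvme1 -n 1       # same data as the "Active Namespaces" section
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # drives an analogous controller shutdown from the kernel side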
[2024-07-25 10:13:07.232753] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232760] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.232767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.232777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.232985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.232991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.232994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.232998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.233007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.233011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.233014] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.233023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.233034] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.237210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.237219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.237223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.237227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.237236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.237240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.237244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9d8ec0) 00:24:28.256 [2024-07-25 10:13:07.237250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.256 [2024-07-25 10:13:07.237262] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa5c2c0, cid 3, qid 0 00:24:28.256 [2024-07-25 10:13:07.237492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:28.256 [2024-07-25 10:13:07.237498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:28.256 [2024-07-25 10:13:07.237502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:28.256 [2024-07-25 10:13:07.237506] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xa5c2c0) on tqpair=0x9d8ec0 00:24:28.256 [2024-07-25 10:13:07.237513] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:24:28.256 0% 00:24:28.256 Data Units Read: 0 00:24:28.256 Data 
Units Written: 0 00:24:28.256 Host Read Commands: 0 00:24:28.256 Host Write Commands: 0 00:24:28.256 Controller Busy Time: 0 minutes 00:24:28.256 Power Cycles: 0 00:24:28.256 Power On Hours: 0 hours 00:24:28.256 Unsafe Shutdowns: 0 00:24:28.256 Unrecoverable Media Errors: 0 00:24:28.256 Lifetime Error Log Entries: 0 00:24:28.256 Warning Temperature Time: 0 minutes 00:24:28.256 Critical Temperature Time: 0 minutes 00:24:28.256 00:24:28.256 Number of Queues 00:24:28.256 ================ 00:24:28.256 Number of I/O Submission Queues: 127 00:24:28.256 Number of I/O Completion Queues: 127 00:24:28.256 00:24:28.256 Active Namespaces 00:24:28.256 ================= 00:24:28.256 Namespace ID:1 00:24:28.256 Error Recovery Timeout: Unlimited 00:24:28.256 Command Set Identifier: NVM (00h) 00:24:28.256 Deallocate: Supported 00:24:28.256 Deallocated/Unwritten Error: Not Supported 00:24:28.256 Deallocated Read Value: Unknown 00:24:28.256 Deallocate in Write Zeroes: Not Supported 00:24:28.256 Deallocated Guard Field: 0xFFFF 00:24:28.257 Flush: Supported 00:24:28.257 Reservation: Supported 00:24:28.257 Namespace Sharing Capabilities: Multiple Controllers 00:24:28.257 Size (in LBAs): 131072 (0GiB) 00:24:28.257 Capacity (in LBAs): 131072 (0GiB) 00:24:28.257 Utilization (in LBAs): 131072 (0GiB) 00:24:28.257 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:28.257 EUI64: ABCDEF0123456789 00:24:28.257 UUID: 19449a3d-466c-4c8e-b502-b8ee4b410b6b 00:24:28.257 Thin Provisioning: Not Supported 00:24:28.257 Per-NS Atomic Units: Yes 00:24:28.257 Atomic Boundary Size (Normal): 0 00:24:28.257 Atomic Boundary Size (PFail): 0 00:24:28.257 Atomic Boundary Offset: 0 00:24:28.257 Maximum Single Source Range Length: 65535 00:24:28.257 Maximum Copy Length: 65535 00:24:28.257 Maximum Source Range Count: 1 00:24:28.257 NGUID/EUI64 Never Reused: No 00:24:28.257 Namespace Write Protected: No 00:24:28.257 Number of LBA Formats: 1 00:24:28.257 Current LBA Format: LBA Format #00 00:24:28.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:28.257 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.257 rmmod nvme_tcp 00:24:28.257 rmmod nvme_fabrics 00:24:28.257 rmmod nvme_keyring 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1386565 ']' 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1386565 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1386565 ']' 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1386565 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.257 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1386565 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1386565' 00:24:28.519 killing process with pid 1386565 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1386565 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1386565 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.519 10:13:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:31.068 00:24:31.068 real 0m11.155s 00:24:31.068 user 0m8.088s 00:24:31.068 sys 0m5.812s 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:31.068 ************************************ 00:24:31.068 END TEST nvmf_identify 00:24:31.068 ************************************ 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.068 ************************************ 00:24:31.068 START TEST nvmf_perf 
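After the identify pass the harness tears everything down before moving on to nvmf_perf: the test subsystem is deleted over the SPDK JSON-RPC socket, the kernel NVMe-oF initiator modules are removed, the nvmf_tgt process for this run (pid 1386565) is killed and waited on, and the leftover test address is flushed from the initiator port. Condensed into plain commands, with the PID and interface name taken from the trace above, that sequence is roughly:

    # Condensed nvmftestfini sequence for this run (sketch; rpc.py path and PID are job-specific)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring, as the rmmod lines show
    kill 1386565 && wait 1386565   # stop the nvmf_tgt target process (wait only works from its parent shell)
    ip -4 addr flush cvl_0_1       # drop the test address from the initiator-side port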
00:24:31.068 ************************************ 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:31.068 * Looking for test storage... 00:24:31.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.068 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.069 10:13:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.703 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:37.704 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:37.704 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:37.704 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:37.704 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.704 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:37.704 10:13:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:37.966 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.966 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:24:37.966 00:24:37.966 --- 10.0.0.2 ping statistics --- 00:24:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.966 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.966 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:37.966 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:24:37.966 00:24:37.966 --- 10.0.0.1 ping statistics --- 00:24:37.966 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.966 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1391003 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1391003 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1391003 ']' 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:37.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.966 10:13:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:37.966 [2024-07-25 10:13:16.998362] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:37.966 [2024-07-25 10:13:16.998425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.966 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.966 [2024-07-25 10:13:17.072147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.227 [2024-07-25 10:13:17.148265] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.227 [2024-07-25 10:13:17.148302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.227 [2024-07-25 10:13:17.148309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.227 [2024-07-25 10:13:17.148316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.227 [2024-07-25 10:13:17.148322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:38.228 [2024-07-25 10:13:17.148491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.228 [2024-07-25 10:13:17.148627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.228 [2024-07-25 10:13:17.148783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.228 [2024-07-25 10:13:17.148784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.799 10:13:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:39.371 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:39.371 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:39.371 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:39.371 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:39.633 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:39.633 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:39.633 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:39.633 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:39.633 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:39.894 [2024-07-25 10:13:18.789481] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.894 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:39.894 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:39.894 10:13:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:40.155 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:40.155 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:40.416 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.416 [2024-07-25 10:13:19.451904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.416 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:40.677 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:40.677 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:40.677 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:40.677 10:13:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:42.065 Initializing NVMe Controllers 00:24:42.065 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:42.065 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:42.065 Initialization complete. Launching workers. 
00:24:42.065 ======================================================== 00:24:42.065 Latency(us) 00:24:42.065 Device Information : IOPS MiB/s Average min max 00:24:42.065 PCIE (0000:65:00.0) NSID 1 from core 0: 79625.36 311.04 401.49 13.29 5333.03 00:24:42.065 ======================================================== 00:24:42.065 Total : 79625.36 311.04 401.49 13.29 5333.03 00:24:42.065 00:24:42.065 10:13:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:42.065 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.009 Initializing NVMe Controllers 00:24:43.009 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.009 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.009 Initialization complete. Launching workers. 00:24:43.009 ======================================================== 00:24:43.009 Latency(us) 00:24:43.009 Device Information : IOPS MiB/s Average min max 00:24:43.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 73.00 0.29 13762.13 501.68 45456.57 00:24:43.009 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24502.81 7963.56 48340.71 00:24:43.009 ======================================================== 00:24:43.009 Total : 114.00 0.45 17625.00 501.68 48340.71 00:24:43.009 00:24:43.009 10:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.270 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.657 Initializing NVMe Controllers 00:24:44.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:44.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:44.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:44.657 Initialization complete. Launching workers. 
00:24:44.657 ======================================================== 00:24:44.657 Latency(us) 00:24:44.657 Device Information : IOPS MiB/s Average min max 00:24:44.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8124.98 31.74 3951.65 759.79 8319.32 00:24:44.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3702.99 14.46 8693.46 6134.64 16201.68 00:24:44.657 ======================================================== 00:24:44.657 Total : 11827.98 46.20 5436.17 759.79 16201.68 00:24:44.657 00:24:44.657 10:13:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:44.657 10:13:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:44.657 10:13:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:44.657 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.204 Initializing NVMe Controllers 00:24:47.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.204 Controller IO queue size 128, less than required. 00:24:47.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.204 Controller IO queue size 128, less than required. 00:24:47.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.204 Initialization complete. Launching workers. 00:24:47.204 ======================================================== 00:24:47.204 Latency(us) 00:24:47.204 Device Information : IOPS MiB/s Average min max 00:24:47.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 854.00 213.50 156741.43 85934.82 230015.16 00:24:47.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.00 149.50 220200.27 70457.80 375490.77 00:24:47.204 ======================================================== 00:24:47.204 Total : 1451.99 363.00 182876.69 70457.80 375490.77 00:24:47.204 00:24:47.204 10:13:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:47.204 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.204 No valid NVMe controllers or AIO or URING devices found 00:24:47.204 Initializing NVMe Controllers 00:24:47.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.204 Controller IO queue size 128, less than required. 00:24:47.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.204 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:47.204 Controller IO queue size 128, less than required. 00:24:47.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:47.204 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:47.204 WARNING: Some requested NVMe devices were skipped 00:24:47.204 10:13:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:47.204 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.748 Initializing NVMe Controllers 00:24:49.749 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:49.749 Controller IO queue size 128, less than required. 00:24:49.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.749 Controller IO queue size 128, less than required. 00:24:49.749 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:49.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:49.749 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:49.749 Initialization complete. Launching workers. 00:24:49.749 00:24:49.749 ==================== 00:24:49.749 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:49.749 TCP transport: 00:24:49.749 polls: 48761 00:24:49.749 idle_polls: 14913 00:24:49.749 sock_completions: 33848 00:24:49.749 nvme_completions: 3629 00:24:49.749 submitted_requests: 5356 00:24:49.749 queued_requests: 1 00:24:49.749 00:24:49.749 ==================== 00:24:49.749 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:49.749 TCP transport: 00:24:49.749 polls: 52038 00:24:49.749 idle_polls: 16467 00:24:49.749 sock_completions: 35571 00:24:49.749 nvme_completions: 3429 00:24:49.749 submitted_requests: 4984 00:24:49.749 queued_requests: 1 00:24:49.749 ======================================================== 00:24:49.749 Latency(us) 00:24:49.749 Device Information : IOPS MiB/s Average min max 00:24:49.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 906.99 226.75 145317.63 93074.55 255152.95 00:24:49.749 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 856.99 214.25 154888.34 69380.09 258848.48 00:24:49.749 ======================================================== 00:24:49.749 Total : 1763.98 440.99 149967.34 69380.09 258848.48 00:24:49.749 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:49.749 rmmod nvme_tcp 00:24:49.749 rmmod nvme_fabrics 00:24:49.749 rmmod nvme_keyring 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1391003 ']' 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1391003 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1391003 ']' 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1391003 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:49.749 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1391003 00:24:50.008 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:50.008 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:50.008 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1391003' 00:24:50.008 killing process with pid 1391003 00:24:50.008 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 1391003 00:24:50.008 10:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1391003 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.921 10:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.464 10:13:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.464 00:24:54.464 real 0m23.287s 00:24:54.464 user 0m56.608s 00:24:54.464 sys 0m7.569s 00:24:54.464 10:13:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.464 10:13:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:54.464 ************************************ 00:24:54.464 END TEST nvmf_perf 00:24:54.464 ************************************ 00:24:54.464 10:13:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.464 10:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:54.464 10:13:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:54.465 ************************************ 00:24:54.465 START TEST nvmf_fio_host 00:24:54.465 ************************************ 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.465 * Looking for test storage... 00:24:54.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.465 10:13:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:01.077 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:01.077 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:01.077 10:13:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:01.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:01.077 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:01.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:01.078 10:13:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:01.078 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:01.078 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.078 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:01.078 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.078 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:01.338 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.338 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:25:01.338 00:25:01.338 --- 10.0.0.2 ping statistics --- 00:25:01.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.338 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.338 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.338 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:25:01.338 00:25:01.338 --- 10.0.0.1 ping statistics --- 00:25:01.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.338 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1397959 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1397959 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1397959 ']' 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.338 10:13:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.338 [2024-07-25 10:13:40.366672] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:01.339 [2024-07-25 10:13:40.366738] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.339 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.339 [2024-07-25 10:13:40.437998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.600 [2024-07-25 10:13:40.513700] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.600 [2024-07-25 10:13:40.513738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.600 [2024-07-25 10:13:40.513746] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.600 [2024-07-25 10:13:40.513756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.600 [2024-07-25 10:13:40.513762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.600 [2024-07-25 10:13:40.513898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.600 [2024-07-25 10:13:40.514014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.600 [2024-07-25 10:13:40.514170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.600 [2024-07-25 10:13:40.514171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.172 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.172 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:02.172 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:02.172 [2024-07-25 10:13:41.296475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.433 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:02.433 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:02.433 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.433 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:02.433 Malloc1 00:25:02.433 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.693 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:02.953 10:13:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.953 [2024-07-25 10:13:42.022616] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.953 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:03.214 
10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:03.214 10:13:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.474 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:03.474 fio-3.35 00:25:03.474 Starting 
1 thread 00:25:03.735 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.281 00:25:06.281 test: (groupid=0, jobs=1): err= 0: pid=1398495: Thu Jul 25 10:13:44 2024 00:25:06.281 read: IOPS=10.8k, BW=42.3MiB/s (44.3MB/s)(84.6MiB/2003msec) 00:25:06.281 slat (usec): min=2, max=292, avg= 2.18, stdev= 2.77 00:25:06.281 clat (usec): min=2158, max=12130, avg=6709.35, stdev=1407.95 00:25:06.281 lat (usec): min=2161, max=12132, avg=6711.53, stdev=1407.97 00:25:06.281 clat percentiles (usec): 00:25:06.281 | 1.00th=[ 4080], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5211], 00:25:06.281 | 30.00th=[ 5800], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:25:06.281 | 70.00th=[ 7439], 80.00th=[ 7701], 90.00th=[ 8225], 95.00th=[ 9110], 00:25:06.281 | 99.00th=[10421], 99.50th=[10814], 99.90th=[11469], 99.95th=[11731], 00:25:06.281 | 99.99th=[11994] 00:25:06.281 bw ( KiB/s): min=37912, max=55064, per=99.78%, avg=43178.00, stdev=7978.59, samples=4 00:25:06.281 iops : min= 9478, max=13766, avg=10794.50, stdev=1994.65, samples=4 00:25:06.281 write: IOPS=10.8k, BW=42.2MiB/s (44.2MB/s)(84.4MiB/2003msec); 0 zone resets 00:25:06.281 slat (usec): min=2, max=271, avg= 2.26, stdev= 2.03 00:25:06.281 clat (usec): min=2114, max=8080, avg=5065.00, stdev=1028.61 00:25:06.281 lat (usec): min=2116, max=8112, avg=5067.26, stdev=1028.67 00:25:06.281 clat percentiles (usec): 00:25:06.281 | 1.00th=[ 2802], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3982], 00:25:06.281 | 30.00th=[ 4293], 40.00th=[ 4883], 50.00th=[ 5342], 60.00th=[ 5604], 00:25:06.281 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6456], 00:25:06.281 | 99.00th=[ 6980], 99.50th=[ 7242], 99.90th=[ 7570], 99.95th=[ 7832], 00:25:06.281 | 99.99th=[ 8094] 00:25:06.281 bw ( KiB/s): min=38488, max=55448, per=99.93%, avg=43142.00, stdev=8215.70, samples=4 00:25:06.281 iops : min= 9622, max=13862, avg=10785.50, stdev=2053.92, samples=4 00:25:06.281 lat (msec) : 4=10.55%, 10=88.40%, 20=1.06% 00:25:06.281 cpu : usr=67.53%, sys=25.77%, ctx=17, majf=0, minf=6 00:25:06.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:06.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:06.281 issued rwts: total=21669,21618,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:06.281 00:25:06.281 Run status group 0 (all jobs): 00:25:06.281 READ: bw=42.3MiB/s (44.3MB/s), 42.3MiB/s-42.3MiB/s (44.3MB/s-44.3MB/s), io=84.6MiB (88.8MB), run=2003-2003msec 00:25:06.281 WRITE: bw=42.2MiB/s (44.2MB/s), 42.2MiB/s-42.2MiB/s (44.2MB/s-44.2MB/s), io=84.4MiB (88.5MB), run=2003-2003msec 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:06.281 10:13:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:06.281 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:06.281 fio-3.35 00:25:06.281 Starting 1 thread 00:25:06.281 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.830 00:25:08.830 test: (groupid=0, jobs=1): err= 0: pid=1399318: Thu Jul 25 10:13:47 2024 00:25:08.830 read: IOPS=8525, BW=133MiB/s (140MB/s)(267MiB/2005msec) 00:25:08.830 slat (usec): min=3, max=111, avg= 3.61, stdev= 1.44 00:25:08.830 clat (usec): min=3410, max=30311, avg=9252.28, stdev=2676.82 00:25:08.830 lat (usec): min=3413, max=30315, avg=9255.89, stdev=2677.08 00:25:08.830 clat percentiles (usec): 00:25:08.830 | 1.00th=[ 4621], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6849], 00:25:08.830 | 30.00th=[ 7635], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9765], 00:25:08.830 | 70.00th=[10552], 80.00th=[11469], 90.00th=[12649], 95.00th=[14091], 00:25:08.830 | 99.00th=[16909], 99.50th=[17433], 99.90th=[18220], 99.95th=[18482], 00:25:08.830 | 99.99th=[28181] 00:25:08.830 bw ( KiB/s): min=55648, max=81792, per=51.18%, avg=69816.00, stdev=13333.48, samples=4 
00:25:08.830 iops : min= 3478, max= 5112, avg=4363.50, stdev=833.34, samples=4 00:25:08.830 write: IOPS=5051, BW=78.9MiB/s (82.8MB/s)(142MiB/1803msec); 0 zone resets 00:25:08.830 slat (usec): min=39, max=333, avg=41.00, stdev= 7.16 00:25:08.830 clat (usec): min=2391, max=24456, avg=9974.66, stdev=2301.57 00:25:08.830 lat (usec): min=2433, max=24501, avg=10015.66, stdev=2304.58 00:25:08.830 clat percentiles (usec): 00:25:08.830 | 1.00th=[ 6456], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8225], 00:25:08.830 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:25:08.830 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12256], 95.00th=[13829], 00:25:08.830 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21890], 99.95th=[22152], 00:25:08.830 | 99.99th=[24511] 00:25:08.830 bw ( KiB/s): min=59328, max=84512, per=89.87%, avg=72640.00, stdev=13122.99, samples=4 00:25:08.830 iops : min= 3708, max= 5282, avg=4540.00, stdev=820.19, samples=4 00:25:08.830 lat (msec) : 4=0.08%, 10=62.29%, 20=37.37%, 50=0.26% 00:25:08.830 cpu : usr=81.49%, sys=13.57%, ctx=8, majf=0, minf=15 00:25:08.830 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:08.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:08.830 issued rwts: total=17094,9108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.830 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:08.830 00:25:08.830 Run status group 0 (all jobs): 00:25:08.830 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2005-2005msec 00:25:08.830 WRITE: bw=78.9MiB/s (82.8MB/s), 78.9MiB/s-78.9MiB/s (82.8MB/s-82.8MB/s), io=142MiB (149MB), run=1803-1803msec 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:08.830 rmmod nvme_tcp 00:25:08.830 rmmod nvme_fabrics 00:25:08.830 rmmod nvme_keyring 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1397959 ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1397959 
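Both fio passes above (example_config.fio and mock_sgl_config.fio) drive the target the same way: the SPDK NVMe fio plugin is injected through LD_PRELOAD and the remote namespace is selected with fio's key=value --filename syntax. In condensed form, with paths shortened and relying on the job files already setting ioengine=spdk as the banners above show:

    # run fio against namespace 1 of the NVMe/TCP subsystem at 10.0.0.2:4420
    LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio \
        ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096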
00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1397959 ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1397959 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1397959 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1397959' 00:25:08.830 killing process with pid 1397959 00:25:08.830 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1397959 00:25:08.831 10:13:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1397959 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.091 10:13:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.636 00:25:11.636 real 0m17.099s 00:25:11.636 user 1m4.218s 00:25:11.636 sys 0m7.224s 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.636 ************************************ 00:25:11.636 END TEST nvmf_fio_host 00:25:11.636 ************************************ 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.636 ************************************ 00:25:11.636 START TEST nvmf_failover 00:25:11.636 ************************************ 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.636 * Looking for test storage... 
00:25:11.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
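One detail worth pulling out of the common.sh setup above: the host identity used by later connects comes from nvme-cli, and the host ID is the UUID portion of that NQN. The parameter expansion below is an assumed restatement of what common.sh does; the UUID is the one traced in this run.

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # 00d0226a-fbea-ec11-9bc7-a4bf019282be
    # later consumed as: $NVME_CONNECT --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID ...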
00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:11.636 10:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:18.219 10:13:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:18.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:18.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:18.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:18.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:18.219 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:18.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:18.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:25:18.220 00:25:18.220 --- 10.0.0.2 ping statistics --- 00:25:18.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.220 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:18.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:18.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.391 ms 00:25:18.220 00:25:18.220 --- 10.0.0.1 ping statistics --- 00:25:18.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:18.220 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:18.220 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1403821 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1403821 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1403821 ']' 00:25:18.481 10:13:57 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:18.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.481 10:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:18.481 [2024-07-25 10:13:57.418704] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:18.481 [2024-07-25 10:13:57.418769] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:18.481 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.481 [2024-07-25 10:13:57.505376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:18.481 [2024-07-25 10:13:57.566163] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:18.481 [2024-07-25 10:13:57.566197] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:18.481 [2024-07-25 10:13:57.566208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:18.481 [2024-07-25 10:13:57.566213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:18.481 [2024-07-25 10:13:57.566217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
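For orientation, the network that nvmf_tcp_init assembled above (before nvmf_tgt was launched) is the two cvl_0_* ports split across a network namespace, with the initiator side kept in the default namespace. The commands are the ones traced above, collected in one place:

    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in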
00:25:18.481 [2024-07-25 10:13:57.566322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:18.481 [2024-07-25 10:13:57.566481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:18.481 [2024-07-25 10:13:57.566484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:19.423 [2024-07-25 10:13:58.370778] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:19.423 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:19.717 Malloc0 00:25:19.717 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:19.717 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:19.978 10:13:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.978 [2024-07-25 10:13:59.053843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:19.978 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:20.240 [2024-07-25 10:13:59.226298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:20.240 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:20.501 [2024-07-25 10:13:59.398815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1404337 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1404337 /var/tmp/bdevperf.sock 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1404337 ']' 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.501 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.502 10:13:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:21.446 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:21.446 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:21.446 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.706 NVMe0n1 00:25:21.706 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:21.966 00:25:21.966 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1404569 00:25:21.966 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:21.966 10:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:22.904 10:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.164 [2024-07-25 10:14:02.062858] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.164 [2024-07-25 10:14:02.062906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [2024-07-25 10:14:02.062912] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [2024-07-25 10:14:02.062922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [2024-07-25 10:14:02.062927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [2024-07-25 10:14:02.062931] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [2024-07-25 10:14:02.062935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x532b80 is same with the state(5) to be set 00:25:23.165 [the identical tcp.c:1653:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x532b80 repeats many more times, with only the timestamp advancing]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 [2024-07-25 10:14:02.063481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x532b80 is same with the state(5) to be set 00:25:23.166 10:14:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:26.502 10:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:26.502 00:25:26.502 10:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:26.502 [2024-07-25 10:14:05.554360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554426] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554457] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554480] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 [2024-07-25 10:14:05.554484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x533990 is same with the state(5) to be set 00:25:26.502 10:14:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:29.802 10:14:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.802 [2024-07-25 10:14:08.714236] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.802 10:14:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:30.745 10:14:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:31.006 [2024-07-25 10:14:09.890044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x534870 is same with the state(5) to be set 00:25:31.006 [2024-07-25 10:14:09.890081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x534870 is same with the state(5) to be set 00:25:31.006 [2024-07-25 10:14:09.890087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x534870 is same with the state(5) to be set 00:25:31.006 [2024-07-25 10:14:09.890092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x534870 is same with the state(5) to be set 00:25:31.006 [2024-07-25 10:14:09.890097] 
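The steps above are one complete path flap of the failover test: attach a second TCP path to the same subsystem on port 4422 through the bdevperf RPC socket, remove the listener the initiator was using (4421), restore 4420, then retire 4422; the lines that follow show the script then waiting for bdevperf, killing it, and dumping its log. A minimal stand-alone sketch of that sequence, assuming an SPDK checkout at $SPDK_DIR, the addresses and ports from this run, and a hypothetical $BDEVPERF_PID holding the bdevperf PID (1404337 here):

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Give bdevperf a second path to the same subsystem (mirrors failover.sh@47).
  "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Drop the listener currently in use so the initiator must fail over (failover.sh@48).
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3

  # Bring the original port back and retire the alternate one (failover.sh@53-57).
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Tear bdevperf down and keep its output for inspection (failover.sh@61-63).
  kill "$BDEVPERF_PID"
  wait "$BDEVPERF_PID"    # wait only works if bdevperf was started by this same shell
  cat "$SPDK_DIR/test/nvmf/host/try.txt"

Each remove_listener step in the log is followed by a burst of the nvmf_tcp_qpair_set_recv_state errors, which appears to be the target tearing down the qpairs on the dropped port while the initiator reconnects.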
00:25:31.006 10:14:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1404569
00:25:37.601 0
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1404337 ']'
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1404337'
00:25:37.601 killing process with pid 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1404337
00:25:37.601 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:37.601 [2024-07-25 10:13:59.464960] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:25:37.601 [2024-07-25 10:13:59.465015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1404337 ]
00:25:37.601 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.601 [2024-07-25 10:13:59.523743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:37.601 [2024-07-25 10:13:59.587820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.601 Running I/O for 15 seconds...
00:25:37.601 [2024-07-25 10:14:02.064631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:37.601 [2024-07-25 10:14:02.064665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[try.txt continues with matching nvme_io_qpair_print_command / spdk_nvme_print_completion pairs for every outstanding I/O on qid:1: READ commands for lba 97376 through 98000 and WRITE commands for lba 98008 through 98376, len:8 each, all completed as ABORTED - SQ DELETION (00/08)]
00:25:37.604 [2024-07-25 10:14:02.066840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:37.604 [2024-07-25 10:14:02.066847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:37.604 [2024-07-25 10:14:02.066853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98384 len:8 PRP1 0x0 PRP2 0x0
00:25:37.604 [2024-07-25 10:14:02.066864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.604 [2024-07-25 10:14:02.066900] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18342c0 was disconnected and freed. reset controller.
00:25:37.604 [2024-07-25 10:14:02.066910] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:37.604 [2024-07-25 10:14:02.066929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.604 [2024-07-25 10:14:02.066937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.604 [2024-07-25 10:14:02.066946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.604 [2024-07-25 10:14:02.066952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.604 [2024-07-25 10:14:02.066960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.604 [2024-07-25 10:14:02.066967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:02.066975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.605 [2024-07-25 10:14:02.066982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:02.066990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.605 [2024-07-25 10:14:02.070597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.605 [2024-07-25 10:14:02.070621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1837ef0 (9): Bad file descriptor 00:25:37.605 [2024-07-25 10:14:02.149083] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.605 [2024-07-25 10:14:05.554934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.554972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.554988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.554997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555146] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555322] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.605 [2024-07-25 10:14:05.555362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17640 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.605 [2024-07-25 10:14:05.555497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.605 [2024-07-25 10:14:05.555507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.606 [2024-07-25 10:14:05.555671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.555790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:17768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.555985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.555992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.606 [2024-07-25 10:14:05.556095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.606 [2024-07-25 10:14:05.556105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.606 [2024-07-25 10:14:05.556113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:17896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 
[2024-07-25 10:14:05.556384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556554] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:17280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.607 [2024-07-25 10:14:05.556765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.607 [2024-07-25 10:14:05.556782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.607 [2024-07-25 10:14:05.556791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17352 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.556984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.556993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:37.608 [2024-07-25 10:14:05.557069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:05.557151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:37.608 [2024-07-25 10:14:05.557178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:37.608 [2024-07-25 10:14:05.557185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17480 len:8 PRP1 0x0 PRP2 0x0 00:25:37.608 [2024-07-25 10:14:05.557195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557235] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1866c80 was disconnected and freed. reset controller. 
00:25:37.608 [2024-07-25 10:14:05.557245] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:37.608 [2024-07-25 10:14:05.557264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.608 [2024-07-25 10:14:05.557272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.608 [2024-07-25 10:14:05.557288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.608 [2024-07-25 10:14:05.557303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:37.608 [2024-07-25 10:14:05.557318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:05.557326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:37.608 [2024-07-25 10:14:05.560893] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:37.608 [2024-07-25 10:14:05.560919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1837ef0 (9): Bad file descriptor 00:25:37.608 [2024-07-25 10:14:05.634791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:37.608 [2024-07-25 10:14:09.890738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.608 [2024-07-25 10:14:09.890937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.608 [2024-07-25 10:14:09.890944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.890954] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.890961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.890970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.890987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:38 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.609 [2024-07-25 10:14:09.891435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72136 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.609 [2024-07-25 10:14:09.891616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.609 [2024-07-25 10:14:09.891625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 
[2024-07-25 10:14:09.891654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.610 [2024-07-25 10:14:09.891863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.891992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.891999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.610 [2024-07-25 10:14:09.892181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.610 [2024-07-25 10:14:09.892191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 
10:14:09.892345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.611 [2024-07-25 10:14:09.892406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892517] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:52 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.611 [2024-07-25 10:14:09.892853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72656 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:37.611 [2024-07-25 10:14:09.892861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.612 [2024-07-25 10:14:09.892878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:37.612 [2024-07-25 10:14:09.892894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:37.612 [2024-07-25 10:14:09.892911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:37.612 [2024-07-25 10:14:09.892943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:25:37.612 [2024-07-25 10:14:09.892951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:37.612 [2024-07-25 10:14:09.892967] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:37.612 [2024-07-25 10:14:09.892973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:25:37.612 [2024-07-25 10:14:09.892981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.892989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:37.612 [2024-07-25 10:14:09.892994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:37.612 [2024-07-25 10:14:09.893000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:25:37.612 [2024-07-25 10:14:09.893008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:37.612 [2024-07-25 10:14:09.893045] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18668b0 was disconnected and freed. reset controller. 
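Every queued READ/WRITE above is completed with ABORTED - SQ DELETION (00/08), i.e. NVMe status code type 0x0 / status code 0x08, as the I/O qpair is torn down for the failover reported next. Assuming this bdevperf console output was captured to a file such as the try.txt the script cats and removes further down in the trace (an assumption; later runs overwrite it), a one-line tally of the aborted commands would be:

  # Count the queued commands that were force-completed when the SQ was deleted.
  # The try.txt path is taken from the trace below; adjust if the output landed elsewhere.
  grep -c 'ABORTED - SQ DELETION' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt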
00:25:37.612 [2024-07-25 10:14:09.893055] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:25:37.612 [2024-07-25 10:14:09.893076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:37.612 [2024-07-25 10:14:09.893084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.612 [2024-07-25 10:14:09.893093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:37.612 [2024-07-25 10:14:09.893102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.612 [2024-07-25 10:14:09.893110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:37.612 [2024-07-25 10:14:09.893119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.612 [2024-07-25 10:14:09.893127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:37.612 [2024-07-25 10:14:09.893135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:37.612 [2024-07-25 10:14:09.893143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:37.612 [2024-07-25 10:14:09.896720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:37.612 [2024-07-25 10:14:09.896748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1837ef0 (9): Bad file descriptor
00:25:37.612 [2024-07-25 10:14:09.925793] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
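The bdevperf summary that follows reports 11634.30 IOPS at 45.45 MiB/s with a 10582.99 us average latency for the 15 s verify run. A rough cross-check using only numbers from that table (4096-byte I/Os, queue depth 128, 427.92 fail/s) reproduces both columns; this is a sketch for sanity-checking the figures, not part of the test scripts:

  # MiB/s should equal IOPS * 4096 bytes; with a fixed queue depth of 128,
  # average latency should sit near 128 / (total completion rate).
  awk 'BEGIN {
      iops = 11634.30; fails = 427.92                            # from the 15 s row below
      printf "MiB/s  ~ %.2f\n", iops * 4096 / (1024 * 1024)      # table: 45.45
      printf "avg us ~ %.0f\n", 128 / (iops + fails) * 1e6       # table: 10582.99
  }'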
00:25:37.612
00:25:37.612 Latency(us)
00:25:37.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:37.612 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:37.612 Verification LBA range: start 0x0 length 0x4000
00:25:37.612 NVMe0n1 : 15.01 11634.30 45.45 427.92 0.00 10582.99 948.91 18022.40
00:25:37.612 ===================================================================================================================
00:25:37.612 Total : 11634.30 45.45 427.92 0.00 10582.99 948.91 18022.40
00:25:37.612 Received shutdown signal, test time was about 15.000000 seconds
00:25:37.612
00:25:37.612 Latency(us)
00:25:37.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:37.612 ===================================================================================================================
00:25:37.612 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1407484
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1407484 /var/tmp/bdevperf.sock
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1407484 ']'
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
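The trace that follows registers additional listeners on ports 4421 and 4422, attaches the NVMe0 controller to the new bdevperf instance over all three ports, and then detaches the 4420 path so that I/O fails over to one of the remaining paths. A condensed sketch of that sequence, built only from the commands and arguments visible in the trace (rpc.py path, addresses, and NQN copied from it; the get_controllers/grep checks are omitted):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Extra target listeners for the alternate paths.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

  # Attach the same controller over all three ports, then drop the active path.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n $nqn
  done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller \
      NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn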
00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:37.612 10:14:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:38.184 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:38.184 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:38.184 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:38.184 [2024-07-25 10:14:17.221401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.184 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:38.444 [2024-07-25 10:14:17.389808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:38.444 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:38.705 NVMe0n1 00:25:38.705 10:14:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.277 00:25:39.278 10:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.538 00:25:39.538 10:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.538 10:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:39.800 10:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:39.800 10:14:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:43.171 10:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:43.171 10:14:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:43.171 10:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.171 10:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1408709 00:25:43.171 10:14:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1408709 00:25:44.114 0 00:25:44.114 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:44.114 [2024-07-25 10:14:16.310086] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:44.114 [2024-07-25 10:14:16.310142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407484 ] 00:25:44.114 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.114 [2024-07-25 10:14:16.369195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.114 [2024-07-25 10:14:16.432217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.114 [2024-07-25 10:14:18.877766] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:44.114 [2024-07-25 10:14:18.877811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.114 [2024-07-25 10:14:18.877822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.114 [2024-07-25 10:14:18.877830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.114 [2024-07-25 10:14:18.877838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.114 [2024-07-25 10:14:18.877846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.114 [2024-07-25 10:14:18.877853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.114 [2024-07-25 10:14:18.877861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:44.114 [2024-07-25 10:14:18.877868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:44.114 [2024-07-25 10:14:18.877875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:44.114 [2024-07-25 10:14:18.877901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:44.114 [2024-07-25 10:14:18.877915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10a2ef0 (9): Bad file descriptor 00:25:44.114 [2024-07-25 10:14:19.010455] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:44.114 Running I/O for 1 seconds... 
00:25:44.114 00:25:44.114 Latency(us) 00:25:44.114 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.114 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:44.114 Verification LBA range: start 0x0 length 0x4000 00:25:44.114 NVMe0n1 : 1.00 11338.11 44.29 0.00 0.00 11237.16 1706.67 16602.45 00:25:44.114 =================================================================================================================== 00:25:44.114 Total : 11338.11 44.29 0.00 0.00 11237.16 1706.67 16602.45 00:25:44.114 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.114 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:44.375 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:44.635 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.635 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:44.635 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:44.894 10:14:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:48.197 10:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:48.197 10:14:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1407484 ']' 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1407484' 00:25:48.197 killing process with pid 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1407484 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:48.197 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:48.458 rmmod nvme_tcp 00:25:48.458 rmmod nvme_fabrics 00:25:48.458 rmmod nvme_keyring 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1403821 ']' 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1403821 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1403821 ']' 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1403821 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:48.458 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1403821 00:25:48.459 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:48.459 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:48.459 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1403821' 00:25:48.459 killing process with pid 1403821 00:25:48.459 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1403821 00:25:48.459 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1403821 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:48.720 10:14:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:50.636 00:25:50.636 real 0m39.499s 00:25:50.636 user 2m2.927s 00:25:50.636 sys 0m7.874s 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:50.636 ************************************ 00:25:50.636 END TEST nvmf_failover 00:25:50.636 ************************************ 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:50.636 10:14:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.898 ************************************ 00:25:50.898 START TEST nvmf_host_discovery 00:25:50.898 ************************************ 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:50.898 * Looking for test storage... 00:25:50.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:50.898 10:14:29 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:50.898 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:59.048 10:14:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:59.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:59.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:59.048 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:59.048 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:59.048 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:59.049 10:14:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:59.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.705 ms 00:25:59.049 00:25:59.049 --- 10.0.0.2 ping statistics --- 00:25:59.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.049 rtt min/avg/max/mdev = 0.705/0.705/0.705/0.000 ms 00:25:59.049 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:59.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:59.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:25:59.049 00:25:59.049 --- 10.0.0.1 ping statistics --- 00:25:59.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.049 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1413724 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1413724 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1413724 ']' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
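The trace above is nvmf_tcp_init from test/nvmf/common.sh: one of the two ice ports (cvl_0_0) is moved into a private network namespace that plays the target role, cvl_0_1 stays in the root namespace as the initiator, and the link is verified with a ping in each direction before nvmf_tgt is started inside the namespace. A minimal hand-written sketch of that topology, assuming the interface names and 10.0.0.0/24 addressing visible in the log (an approximation of the helper, not the helper itself):

# Sketch of the namespace topology built by nvmf_tcp_init (names/addresses assumed from the log above).
TARGET_NS=cvl_0_0_ns_spdk        # namespace that will host nvmf_tgt
TARGET_IF=cvl_0_0                # NIC handed to the target namespace
INITIATOR_IF=cvl_0_1             # NIC left in the root namespace (initiator side)

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$TARGET_NS"
ip link set "$TARGET_IF" netns "$TARGET_NS"

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                          # initiator address
ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target address

ip link set "$INITIATOR_IF" up
ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
ip netns exec "$TARGET_NS" ip link set lo up

iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1     # target namespace -> root namespace

Every target-side command that follows is prefixed with 'ip netns exec cvl_0_0_ns_spdk', which is what the NVMF_TARGET_NS_CMD array captures before it is folded into NVMF_APP.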
00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 [2024-07-25 10:14:37.107175] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:59.049 [2024-07-25 10:14:37.107260] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.049 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.049 [2024-07-25 10:14:37.196360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.049 [2024-07-25 10:14:37.289524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:59.049 [2024-07-25 10:14:37.289581] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:59.049 [2024-07-25 10:14:37.289589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:59.049 [2024-07-25 10:14:37.289596] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:59.049 [2024-07-25 10:14:37.289602] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:59.049 [2024-07-25 10:14:37.289626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 [2024-07-25 10:14:37.941760] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:59.049 [2024-07-25 10:14:37.954061] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 null0 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 null1 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1414066 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1414066 /tmp/host.sock 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 1414066 ']' 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:59.049 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:59.049 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:59.050 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.050 [2024-07-25 10:14:38.048803] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
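By this point discovery.sh has the target application configured and a second nvmf_tgt instance (pid 1414066, the "host" side) coming up on /tmp/host.sock: a TCP transport, a discovery listener on 10.0.0.2:8009, and two null bdevs that will back the namespaces added later in the test. Expressed as plain rpc.py calls rather than the rpc_cmd wrapper used by the script, the target-side sequence is roughly the following (a sketch that assumes both applications are already running; the rpc.py path is the one used elsewhere in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Target app (default RPC socket): TCP transport, discovery listener, backing bdevs.
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512     # 1000 MB, 512-byte blocks
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine

# Host app: started with '-m 0x1 -r /tmp/host.sock'; every host-side RPC below
# is issued as 'rpc.py -s /tmp/host.sock ...'.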
00:25:59.050 [2024-07-25 10:14:38.048871] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1414066 ] 00:25:59.050 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.050 [2024-07-25 10:14:38.115028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:59.310 [2024-07-25 10:14:38.188480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:59.882 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:59.882 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:59.882 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.883 10:14:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.883 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:59.883 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:59.883 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:00.144 
10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 [2024-07-25 10:14:39.193100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:00.144 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:26:00.405 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:00.979 [2024-07-25 10:14:39.840538] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.979 [2024-07-25 10:14:39.840560] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.979 [2024-07-25 10:14:39.840575] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.979 
[2024-07-25 10:14:39.928869] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:00.979 [2024-07-25 10:14:40.074828] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:00.979 [2024-07-25 10:14:40.074856] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
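From here the test is simply polling the host application until the discovery service has attached the new subsystem and created its block device. The two helpers in the trace are thin wrappers over host-socket RPCs, and waitforcondition re-evaluates the quoted expression once per second, up to ten times. A condensed sketch of that pattern, with the helper bodies taken from the trace and a plain loop standing in for waitforcondition from autotest_common.sh:

HOST_SOCK=/tmp/host.sock
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

get_subsystem_names() {   # controllers the host-side discovery service has attached
    "$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {         # block devices created from the attached namespaces
    "$RPC" -s "$HOST_SOCK" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Simplified stand-in for waitforcondition: retry once per second, at most ten times.
for _ in {1..10}; do
    [[ "$(get_subsystem_names)" == "nvme0" && "$(get_bdev_list)" == "nvme0n1" ]] && break
    sleep 1
done

The same pattern repeats below for each mutation on the target (adding null1, adding the 4421 listener): the RPC is issued against the default socket, and the host socket is polled until get_bdev_list or get_subsystem_paths reflects the change.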
00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.552 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:01.814 10:14:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.814 [2024-07-25 10:14:40.757283] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:01.814 [2024-07-25 10:14:40.758348] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:01.814 [2024-07-25 10:14:40.758378] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.814 [2024-07-25 10:14:40.847064] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:01.814 10:14:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:01.814 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:01.815 [2024-07-25 10:14:40.909868] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.815 [2024-07-25 10:14:40.909890] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:01.815 [2024-07-25 10:14:40.909895] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:01.815 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:03.199 10:14:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.199 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.199 [2024-07-25 10:14:42.033084] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:03.199 [2024-07-25 10:14:42.033107] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:03.199 [2024-07-25 10:14:42.036809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.199 [2024-07-25 10:14:42.036828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.199 [2024-07-25 10:14:42.036838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.199 [2024-07-25 10:14:42.036845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.199 [2024-07-25 10:14:42.036853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.199 [2024-07-25 10:14:42.036861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.199 [2024-07-25 10:14:42.036869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:03.199 [2024-07-25 10:14:42.036876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:03.199 [2024-07-25 10:14:42.036883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:03.199 [2024-07-25 10:14:42.046823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.199 [2024-07-25 10:14:42.056862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.199 [2024-07-25 10:14:42.057147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-07-25 10:14:42.057169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.199 [2024-07-25 10:14:42.057177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.199 [2024-07-25 10:14:42.057192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.199 [2024-07-25 10:14:42.057228] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.199 [2024-07-25 10:14:42.057237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.199 [2024-07-25 10:14:42.057245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:26:03.199 [2024-07-25 10:14:42.057259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.199 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.199 [2024-07-25 10:14:42.066920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.199 [2024-07-25 10:14:42.067510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-07-25 10:14:42.067549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.199 [2024-07-25 10:14:42.067559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.199 [2024-07-25 10:14:42.067589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.199 [2024-07-25 10:14:42.067614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.199 [2024-07-25 10:14:42.067622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.199 [2024-07-25 10:14:42.067629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.199 [2024-07-25 10:14:42.067645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.199 [2024-07-25 10:14:42.076974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.199 [2024-07-25 10:14:42.077498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-07-25 10:14:42.077536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.199 [2024-07-25 10:14:42.077548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.199 [2024-07-25 10:14:42.077569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.199 [2024-07-25 10:14:42.077581] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.199 [2024-07-25 10:14:42.077588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.199 [2024-07-25 10:14:42.077596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.199 [2024-07-25 10:14:42.077610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
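
The connect() failures with errno 111 (ECONNREFUSED) above and below are the expected fallout of the listener swap driven earlier in this trace: discovery.sh@118 adds a TCP listener on port 4421 and discovery.sh@127 removes the original one on 4420, so the host's reconnect attempts to 10.0.0.2:4420 keep failing until the discovery service prunes the stale path. A minimal sketch of that listener swap, using the exact RPCs visible in the trace (rpc_cmd is assumed here to wrap scripts/rpc.py against the target's default RPC socket):

    # add the second TCP listener, then drop the first one (ports as seen in the trace)
    rpc_cmd nvmf_subsystem_add_listener    nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Once the discovery controller reports the updated log page, bdev_nvme drops the 4420 path ("not found" further down) and keeps only 4421, which is what the port checks later in this test wait for.
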
00:26:03.199 [2024-07-25 10:14:42.087030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.199 [2024-07-25 10:14:42.087601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.199 [2024-07-25 10:14:42.087639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.087649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.087668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.087693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.087701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.087715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.087731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:03.200 [2024-07-25 10:14:42.097089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:03.200 [2024-07-25 10:14:42.097341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.097355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.097363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.097375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.097385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.097393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.097400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.097411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.200 [2024-07-25 10:14:42.107146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.107633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.107647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.107655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.107666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.107676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.107682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.107689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.107704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.200 [2024-07-25 10:14:42.117205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.117677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.117690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.117697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.117708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.117718] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.117724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.117731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.117741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
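
The @914 through @920 expansions repeated throughout this trace come from the test suite's waitforcondition polling helper. A minimal reconstruction from what the xtrace shows (the authoritative definition lives in common/autotest_common.sh and may differ in detail, for example in how exhaustion is reported):

    waitforcondition() {
        local cond=$1        # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' (@914)
        local max=10         # retry budget seen at @915
        while (( max-- )); do            # @916
            if eval "$cond"; then        # @917
                return 0                 # condition met (@918)
            fi
            sleep 1                      # poll interval (@920)
        done
        return 1             # assumed failure path; never reached in this log
    }

Each wait is therefore bounded at roughly ten seconds, which is why the reconnect storm around it is tolerated and retried rather than treated as an immediate failure.
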
00:26:03.200 [2024-07-25 10:14:42.127256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.127725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.127738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.127745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.127756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.127766] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.127772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.127779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.127790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.200 [2024-07-25 10:14:42.137312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.137797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.137809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.137816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.137827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.137837] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.137843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.137850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.137861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
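
The @131 wait below polls the set of transport service IDs still attached to controller nvme0. The helper behind it, reconstructed from the commands the trace shows under host/discovery.sh@63 (jq over bdev_nvme_get_controllers, numerically sorted and flattened onto one line), can be sketched as:

    get_subsystem_paths() {
        # print the trsvcid of every active path of controller $1, sorted, on one line
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

The wait succeeds once this prints only 4421, i.e. the 4420 path has been removed and the single remaining path matches $NVMF_SECOND_PORT.
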
00:26:03.200 [2024-07-25 10:14:42.147364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.147805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.147820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with addr=10.0.0.2, port=4420 00:26:03.200 [2024-07-25 10:14:42.147827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.200 [2024-07-25 10:14:42.147838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.200 [2024-07-25 10:14:42.147848] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.200 [2024-07-25 10:14:42.147854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.200 [2024-07-25 10:14:42.147861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.200 [2024-07-25 10:14:42.147871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:03.200 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:03.200 [2024-07-25 10:14:42.157416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:03.200 [2024-07-25 10:14:42.157898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.200 [2024-07-25 10:14:42.157911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb79d0 with 
addr=10.0.0.2, port=4420 00:26:03.201 [2024-07-25 10:14:42.157918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb79d0 is same with the state(5) to be set 00:26:03.201 [2024-07-25 10:14:42.157929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbb79d0 (9): Bad file descriptor 00:26:03.201 [2024-07-25 10:14:42.157941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:03.201 [2024-07-25 10:14:42.157947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:03.201 [2024-07-25 10:14:42.157954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:03.201 [2024-07-25 10:14:42.157964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:03.201 [2024-07-25 10:14:42.164097] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:03.201 [2024-07-25 10:14:42.164116] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:03.201 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:03.201 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:03.201 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.144 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.444 10:14:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:04.444 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 [2024-07-25 10:14:44.534515] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:05.828 [2024-07-25 10:14:44.534533] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:05.828 [2024-07-25 10:14:44.534546] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:05.828 [2024-07-25 10:14:44.622809] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:05.828 [2024-07-25 10:14:44.728779] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:05.828 [2024-07-25 10:14:44.728807] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 request: 00:26:05.828 { 00:26:05.828 "name": "nvme", 00:26:05.828 "trtype": "tcp", 00:26:05.828 "traddr": "10.0.0.2", 00:26:05.828 "adrfam": "ipv4", 00:26:05.828 "trsvcid": "8009", 00:26:05.828 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:05.828 "wait_for_attach": true, 00:26:05.828 "method": "bdev_nvme_start_discovery", 00:26:05.828 "req_id": 1 00:26:05.828 } 00:26:05.828 Got JSON-RPC error response 00:26:05.828 response: 00:26:05.828 { 00:26:05.828 "code": -17, 00:26:05.828 "message": "File exists" 00:26:05.828 } 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 request: 00:26:05.828 { 00:26:05.828 "name": "nvme_second", 00:26:05.828 "trtype": "tcp", 00:26:05.828 "traddr": "10.0.0.2", 00:26:05.828 "adrfam": "ipv4", 00:26:05.828 "trsvcid": "8009", 00:26:05.828 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:05.828 "wait_for_attach": true, 00:26:05.828 "method": "bdev_nvme_start_discovery", 00:26:05.828 "req_id": 1 00:26:05.828 } 00:26:05.828 Got JSON-RPC error response 00:26:05.828 response: 00:26:05.828 { 00:26:05.828 "code": -17, 00:26:05.828 "message": "File exists" 00:26:05.828 } 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:05.828 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:05.829 10:14:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:05.829 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:05.829 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:05.829 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:05.829 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:05.829 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:06.089 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:07.031 [2024-07-25 10:14:46.001658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.031 [2024-07-25 10:14:46.001687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc4500 with addr=10.0.0.2, port=8010 00:26:07.031 [2024-07-25 10:14:46.001700] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.031 [2024-07-25 10:14:46.001707] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:07.031 [2024-07-25 10:14:46.001718] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:07.971 [2024-07-25 10:14:47.003990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.971 [2024-07-25 10:14:47.004012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbc4500 with addr=10.0.0.2, port=8010 00:26:07.971 [2024-07-25 10:14:47.004023] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:07.971 [2024-07-25 10:14:47.004029] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:26:07.971 [2024-07-25 10:14:47.004035] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:08.913 [2024-07-25 10:14:48.005885] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:08.913 request: 00:26:08.913 { 00:26:08.913 "name": "nvme_second", 00:26:08.913 "trtype": "tcp", 00:26:08.913 "traddr": "10.0.0.2", 00:26:08.913 "adrfam": "ipv4", 00:26:08.913 "trsvcid": "8010", 00:26:08.913 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:08.913 "wait_for_attach": false, 00:26:08.913 "attach_timeout_ms": 3000, 00:26:08.913 "method": "bdev_nvme_start_discovery", 00:26:08.913 "req_id": 1 00:26:08.913 } 00:26:08.913 Got JSON-RPC error response 00:26:08.913 response: 00:26:08.913 { 00:26:08.913 "code": -110, 00:26:08.913 "message": "Connection timed out" 00:26:08.913 } 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:08.913 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1414066 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:09.174 rmmod nvme_tcp 00:26:09.174 rmmod nvme_fabrics 00:26:09.174 rmmod nvme_keyring 00:26:09.174 10:14:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1413724 ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 1413724 ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1413724' 00:26:09.174 killing process with pid 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 1413724 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:09.174 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:09.175 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:09.175 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.175 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.175 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:11.724 00:26:11.724 real 0m20.563s 00:26:11.724 user 0m25.086s 00:26:11.724 sys 0m6.742s 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:11.724 ************************************ 00:26:11.724 END TEST nvmf_host_discovery 00:26:11.724 ************************************ 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
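
Editor's note: the discovery case that just finished above is a deliberate negative test. Nothing listens on 10.0.0.2:8010, so each connect attempt fails with errno 111 and, once the 3000 ms attach timeout expires, the RPC returns code -110 ("Connection timed out"), which the NOT wrapper counts as the expected outcome. A minimal sketch of reproducing it by hand, assuming a host SPDK application is already serving RPCs on /tmp/host.sock (rpc.py abbreviates the full scripts/rpc.py path used in the trace):

    # No listener on 10.0.0.2:8010, so this is expected to time out after ~3 s
    # and return JSON-RPC error -110 ("Connection timed out").
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp \
        -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
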
00:26:11.724 10:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:11.724 ************************************ 00:26:11.724 START TEST nvmf_host_multipath_status 00:26:11.724 ************************************ 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:11.724 * Looking for test storage... 00:26:11.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:11.724 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
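
Editor's note: the nvmf_host_multipath_status test starting here builds a single subsystem (nqn.2016-06.io.spdk:cnode1) with two TCP listeners on 10.0.0.2, ports 4420 and 4421, attaches both paths from a bdevperf host, and then repeatedly flips the listeners' ANA states while asserting the host's view of each path. The state changes and checks seen throughout the rest of the trace are plain RPC calls of this shape (values copied from the trace below; a sketch, not an addition to the test):

    # Make the second listener inaccessible; the 4420 path stays current and
    # the 4421 path is then reported as not accessible on the host side.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    # The host's view is asserted with a jq filter over the io_paths dump:
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
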
00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
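
Editor's note: with NET_TYPE=phy, nvmftestinit (traced below) moves one port of the E810 pair into a private network namespace and addresses both ends before the target is started. A rough manual equivalent of the steps it performs, assuming the same cvl_0_0/cvl_0_1 interface names that appear in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
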
00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:11.725 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:19.872 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.872 
10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:19.872 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:19.872 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.872 10:14:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:19.872 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.872 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:26:19.873 00:26:19.873 --- 10.0.0.2 ping statistics --- 00:26:19.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.873 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms 00:26:19.873 00:26:19.873 --- 10.0.0.1 ping statistics --- 00:26:19.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.873 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1420247 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1420247 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1420247 ']' 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
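
Editor's note: nvmfappstart launches the target inside that namespace and waitforlisten then polls its RPC socket until the application answers. Roughly, with the long Jenkins path shortened and the polling loop only sketched (waitforlisten's real implementation lives in autotest_common.sh):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # waitforlisten (approximation): retry against /var/tmp/spdk.sock until it responds
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
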
00:26:19.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:19.873 10:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.873 [2024-07-25 10:14:57.871151] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:19.873 [2024-07-25 10:14:57.871225] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.873 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.873 [2024-07-25 10:14:57.941415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:19.873 [2024-07-25 10:14:58.015473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.873 [2024-07-25 10:14:58.015511] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.873 [2024-07-25 10:14:58.015518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.873 [2024-07-25 10:14:58.015525] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.873 [2024-07-25 10:14:58.015531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:19.873 [2024-07-25 10:14:58.015667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.873 [2024-07-25 10:14:58.015668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1420247 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:19.873 [2024-07-25 10:14:58.839862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.873 10:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:20.134 Malloc0 00:26:20.134 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:20.134 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.395 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.395 [2024-07-25 10:14:59.472076] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.395 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:20.656 [2024-07-25 10:14:59.640476] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1420607 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1420607 /var/tmp/bdevperf.sock 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 1420607 ']' 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:20.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
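
Editor's note: the target-side setup just traced reduces to a handful of RPCs -- a TCP transport, a malloc bdev, one subsystem with ANA reporting enabled, and listeners on both ports -- after which bdevperf is started as the host and both paths are attached to the same Nvme0 controller, the second one with -x multipath. A condensed sketch using the same values as above (rpc.py again stands for the full scripts/rpc.py path):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Host side: bdevperf with its own RPC socket, both listeners attached as Nvme0.
    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
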
00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:20.656 10:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:21.600 10:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:21.600 10:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:26:21.600 10:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:21.600 10:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:21.861 Nvme0n1 00:26:21.861 10:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:22.432 Nvme0n1 00:26:22.432 10:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:22.432 10:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:24.347 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:24.347 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:24.606 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:24.606 10:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:25.987 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:25.987 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.988 10:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.988 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.988 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.988 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.988 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:26.248 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.509 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.509 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.509 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:26.510 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.510 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.809 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.809 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:26.809 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:26.809 10:15:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:27.068 10:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:28.011 10:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:28.011 10:15:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:28.011 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.011 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.272 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:28.533 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.533 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:28.533 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.533 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.794 10:15:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.794 10:15:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:29.055 10:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.055 10:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:29.055 10:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:29.315 10:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:29.315 10:15:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:30.257 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:30.257 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.516 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.776 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.776 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.776 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.776 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:31.036 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.036 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:31.036 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.036 10:15:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.036 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.036 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.036 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.036 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:31.295 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:31.555 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:31.815 10:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:32.757 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:32.757 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:32.757 10:15:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.757 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.018 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.018 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:33.018 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.018 10:15:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.018 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.018 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.018 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.018 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.278 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.278 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.278 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.278 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:33.540 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.540 10:15:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:33.801 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.801 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:33.801 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:33.801 10:15:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:34.062 10:15:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:35.004 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:35.004 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:35.004 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.004 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:35.266 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.266 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:35.266 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.266 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.527 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:35.787 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:35.787 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:35.788 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:35.788 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.048 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.048 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:36.048 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.048 10:15:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.048 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:36.048 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:36.048 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:36.309 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:36.570 10:15:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:37.515 10:15:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.515 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.775 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.775 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.775 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.775 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.036 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.036 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.036 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.036 10:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.036 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.036 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:38.036 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.036 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.297 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.297 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:38.297 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.297 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.560 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.560 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:38.560 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:38.560 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:38.863 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:38.863 10:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:40.249 10:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:40.249 10:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:40.249 10:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.249 10:15:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.249 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.509 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.509 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.509 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.509 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.768 10:15:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.768 10:15:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.028 10:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.028 10:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:41.028 10:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:41.288 10:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:41.288 10:15:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:42.261 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:42.261 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:42.261 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.261 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:42.522 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:42.522 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:42.522 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.522 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:42.783 10:15:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:43.043 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.043 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:43.043 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.043 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:43.304 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:43.565 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:43.826 10:15:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
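For reference, the checks the harness keeps repeating above reduce to the shell sketch below. The RPC script path, bdevperf socket, subsystem NQN, address and ports are copied from this log; the helper names mirror the port_status()/set_ANA_state() steps of host/multipath_status.sh but are a condensed illustration, not the verbatim script.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf_sock=/var/tmp/bdevperf.sock
  nqn=nqn.2016-06.io.spdk:cnode1

  port_status() {   # port_status <trsvcid> <field> <expected>, e.g. port_status 4420 accessible true
      local got
      got=$("$rpc" -s "$bperf_sock" bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ $got == "$3" ]]
  }

  set_ANA_state() {   # set_ANA_state <state for port 4420> <state for port 4421>
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  # Switch the host to active_active path selection (as done at multipath_status.sh@116 above),
  # flip both listeners, give the host a moment to pick up the change (the script sleeps 1
  # between every state change and check), then assert on the reported io_paths.
  "$rpc" -s "$bperf_sock" bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
  set_ANA_state non_optimized non_optimized
  sleep 1
  port_status 4420 current true && port_status 4421 accessible true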
00:26:44.774 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:44.774 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:44.774 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:44.774 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:45.034 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.034 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:45.034 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.034 10:15:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:45.034 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.034 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:45.034 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.034 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:45.295 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.295 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:45.295 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.295 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:45.556 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:45.816 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:45.816 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:45.817 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:46.077 10:15:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:46.077 10:15:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.465 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.727 10:15:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:47.987 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:47.987 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:47.987 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.987 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:48.250 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.250 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1420607 ']' 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420607' 00:26:48.251 killing process with pid 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1420607 00:26:48.251 Connection closed with partial response: 00:26:48.251 00:26:48.251 00:26:48.251 
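The teardown above follows the killprocess pattern used throughout these tests (the line prefixes point at common/autotest_common.sh). Below is a condensed sketch of only the checks visible in this log, simplified rather than the verbatim helper; 1420607 is the bdevperf PID in this run.

  killprocess() {   # killprocess <pid>
      local pid=$1
      [ -n "$pid" ] || return 1                        # no PID recorded, nothing to do
      kill -0 "$pid" 2>/dev/null || return 1           # process already gone
      [ "$(uname)" = Linux ] || return 1               # the ps form below is Linux-specific
      if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
          return 1                                     # this sketch refuses to kill sudo itself
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # only meaningful when $pid is a child of this shell, as bdevperf is here
  }
  killprocess 1420607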
10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1420607 00:26:48.251 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:48.251 [2024-07-25 10:14:59.725473] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:48.251 [2024-07-25 10:14:59.725536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1420607 ] 00:26:48.251 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.251 [2024-07-25 10:14:59.776187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:48.251 [2024-07-25 10:14:59.828154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.251 Running I/O for 90 seconds... 00:26:48.251 [2024-07-25 10:15:12.910894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:73856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.910928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.910960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:73864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.910967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.910978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:73872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.910984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.910994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:73880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.910999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:73896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:73904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
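# Decoding the completion records that fill the rest of this dump: "(03/02)" is Status Code
# Type 0x3 (Path Related Status) with Status Code 0x02, i.e. Asymmetric Access Inaccessible.
# I/O that was still in flight on a listener whose ANA state the test had just set to
# "inaccessible" completes with this status; with two paths configured, the multipath layer
# is expected to retry it on the remaining accessible path (an interpretation of the trace,
# not something the log states explicitly). An illustrative one-liner to tally these
# completions from the saved bdevperf output (the file path is the one cat'ed above):
  grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt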
00:26:48.251 [2024-07-25 10:15:12.911059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:73592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.251 [2024-07-25 10:15:12.911064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:73600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.251 [2024-07-25 10:15:12.911079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:73920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:73928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:73936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:73944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:73952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:73960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:73968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:73976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:73984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:73992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:74024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:74048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:48.251 [2024-07-25 10:15:12.911548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:74080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.251 [2024-07-25 10:15:12.911553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:74128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:48.252 [2024-07-25 10:15:12.911657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:74152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:74176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:74192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 
lba:74208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:74232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.911989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.911994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912087] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:74288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:74320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:74352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:48.252 [2024-07-25 10:15:12.912793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:48.252 [2024-07-25 10:15:12.912799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:26:48.252 [2024-07-25 10:15:12.912814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:74368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:48.252 [2024-07-25 10:15:12.912819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... several hundred similar NOTICE pairs omitted: READ and WRITE commands on qid:1 (len:8, lba ranges roughly 3288-4208 and 73608-74608) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) while the path was inaccessible, timestamps 10:15:12.912 through 10:15:25.118 ...]
00:26:48.254 [2024-07-25 10:15:25.118057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:48.254 [2024-07-25 10:15:25.118062] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:48.254 [2024-07-25 10:15:25.118073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:48.254 [2024-07-25 10:15:25.118081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:48.254 Received shutdown signal, test time was about 25.783080 seconds 00:26:48.254 00:26:48.255 Latency(us) 00:26:48.255 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:48.255 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:48.255 Verification LBA range: start 0x0 length 0x4000 00:26:48.255 Nvme0n1 : 25.78 10991.96 42.94 0.00 0.00 11626.16 276.48 3019898.88 00:26:48.255 =================================================================================================================== 00:26:48.255 Total : 10991.96 42.94 0.00 0.00 11626.16 276.48 3019898.88 00:26:48.255 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.516 rmmod nvme_tcp 00:26:48.516 rmmod nvme_fabrics 00:26:48.516 rmmod nvme_keyring 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1420247 ']' 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1420247 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 1420247 ']' 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 1420247 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:48.516 10:15:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1420247 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1420247' 00:26:48.516 killing process with pid 1420247 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 1420247 00:26:48.516 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 1420247 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:48.777 10:15:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.325 00:26:51.325 real 0m39.429s 00:26:51.325 user 1m41.537s 00:26:51.325 sys 0m10.918s 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:51.325 ************************************ 00:26:51.325 END TEST nvmf_host_multipath_status 00:26:51.325 ************************************ 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.325 ************************************ 00:26:51.325 START TEST nvmf_discovery_remove_ifc 00:26:51.325 ************************************ 00:26:51.325 10:15:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:51.325 * Looking for test storage... 
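As a quick sanity check on the Latency(us) summary printed above for the just-finished multipath_status run, the reported throughput is consistent with the IOPS figure for 4096-byte I/O (a back-of-the-envelope check, not part of the test output):

    awk 'BEGIN { printf "%.2f MiB/s\n", 10991.96 * 4096 / 1048576 }'   # -> 42.94 MiB/s, matching the MiB/s column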
00:26:51.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.325 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
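The NVME_HOSTNQN / NVME_HOSTID pair set up by nvmf/common.sh in the trace above comes from nvme gen-hostnqn; a minimal sketch of that derivation follows (variable names are taken from the trace, but the exact expansion common.sh uses to extract the uuid is assumed):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")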
00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.326 10:15:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.963 10:15:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:57.963 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:57.963 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:57.963 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.963 
10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:57.963 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:57.963 10:15:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:58.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:58.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.797 ms 00:26:58.226 00:26:58.226 --- 10.0.0.2 ping statistics --- 00:26:58.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.226 rtt min/avg/max/mdev = 0.797/0.797/0.797/0.000 ms 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:26:58.226 00:26:58.226 --- 10.0.0.1 ping statistics --- 00:26:58.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.226 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1431042 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1431042 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1431042 ']' 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
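The nvmf_tcp_init sequence traced above splits the two e810 ports between the default namespace (initiator side, cvl_0_1, 10.0.0.1) and a target namespace (cvl_0_0_ns_spdk holding cvl_0_0, 10.0.0.2). A condensed sketch of that setup, using the interface names from this run (reconstructed from the trace, not the common.sh source):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator IP stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator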
00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:58.226 10:15:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.488 [2024-07-25 10:15:37.390246] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:58.488 [2024-07-25 10:15:37.390312] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.488 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.488 [2024-07-25 10:15:37.476867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.488 [2024-07-25 10:15:37.568749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.488 [2024-07-25 10:15:37.568810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.488 [2024-07-25 10:15:37.568819] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:58.488 [2024-07-25 10:15:37.568825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:58.488 [2024-07-25 10:15:37.568831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:58.488 [2024-07-25 10:15:37.568856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:59.060 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:59.060 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:59.060 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:59.060 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:59.060 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.322 [2024-07-25 10:15:38.232445] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.322 [2024-07-25 10:15:38.240657] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:59.322 null0 00:26:59.322 [2024-07-25 10:15:38.272629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1431085 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1431085 /tmp/host.sock 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 1431085 ']' 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:59.322 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:59.322 10:15:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.322 [2024-07-25 10:15:38.347717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:59.322 [2024-07-25 10:15:38.347784] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431085 ] 00:26:59.322 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.322 [2024-07-25 10:15:38.413366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.583 [2024-07-25 10:15:38.488364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:00.155 
10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.155 10:15:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.391 [2024-07-25 10:15:40.212493] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.391 [2024-07-25 10:15:40.212521] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.391 [2024-07-25 10:15:40.212536] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.391 [2024-07-25 10:15:40.341935] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:01.391 [2024-07-25 10:15:40.443758] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:01.391 [2024-07-25 10:15:40.443807] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:01.391 [2024-07-25 10:15:40.443831] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:01.391 [2024-07-25 10:15:40.443845] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:01.391 [2024-07-25 10:15:40.443868] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.391 [2024-07-25 10:15:40.450874] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10247f0 was disconnected and freed. delete nvme_qpair. 
00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.391 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.392 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.392 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.392 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:01.392 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:01.392 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.651 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.652 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:01.652 10:15:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:02.594 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.855 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:02.855 10:15:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:03.797 10:15:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:04.741 10:15:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:06.127 10:15:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:07.071 [2024-07-25 10:15:45.884297] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:07.071 [2024-07-25 10:15:45.884339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.071 [2024-07-25 10:15:45.884352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.071 [2024-07-25 10:15:45.884361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.071 [2024-07-25 10:15:45.884369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.071 [2024-07-25 10:15:45.884377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.071 [2024-07-25 10:15:45.884384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.071 [2024-07-25 10:15:45.884391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.071 [2024-07-25 10:15:45.884398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.071 [2024-07-25 10:15:45.884406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:07.071 [2024-07-25 10:15:45.884414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:07.071 [2024-07-25 10:15:45.884421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb060 is same with the state(5) to be set 00:27:07.071 [2024-07-25 10:15:45.894317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb060 (9): Bad file descriptor 00:27:07.071 [2024-07-25 10:15:45.904356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:07.071 10:15:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.015 [2024-07-25 10:15:46.961239] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:08.015 [2024-07-25 10:15:46.961278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfeb060 with addr=10.0.0.2, port=4420 00:27:08.015 [2024-07-25 10:15:46.961289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeb060 is same with the state(5) to be set 00:27:08.015 [2024-07-25 10:15:46.961309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfeb060 (9): Bad file descriptor 00:27:08.015 [2024-07-25 10:15:46.961684] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:08.015 [2024-07-25 10:15:46.961713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:08.015 [2024-07-25 10:15:46.961720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:08.015 [2024-07-25 10:15:46.961729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:08.015 [2024-07-25 10:15:46.961743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:08.015 [2024-07-25 10:15:46.961751] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:08.015 10:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.015 10:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:08.015 10:15:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:08.959 [2024-07-25 10:15:47.964128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:08.959 [2024-07-25 10:15:47.964148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:08.959 [2024-07-25 10:15:47.964155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:08.959 [2024-07-25 10:15:47.964162] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:08.959 [2024-07-25 10:15:47.964174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:08.959 [2024-07-25 10:15:47.964192] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:08.959 [2024-07-25 10:15:47.964217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.959 [2024-07-25 10:15:47.964227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.959 [2024-07-25 10:15:47.964236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.959 [2024-07-25 10:15:47.964245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.959 [2024-07-25 10:15:47.964253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.959 [2024-07-25 10:15:47.964260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.959 [2024-07-25 10:15:47.964268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.959 [2024-07-25 10:15:47.964275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.959 [2024-07-25 10:15:47.964284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:08.959 [2024-07-25 10:15:47.964291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:08.959 [2024-07-25 10:15:47.964298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:27:08.959 [2024-07-25 10:15:47.964714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea4c0 (9): Bad file descriptor 00:27:08.959 [2024-07-25 10:15:47.965726] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:08.959 [2024-07-25 10:15:47.965739] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:08.960 10:15:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:08.960 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.960 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:08.960 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:08.960 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:09.221 10:15:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.164 10:15:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:10.164 10:15:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:11.106 [2024-07-25 10:15:49.976042] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:11.106 [2024-07-25 10:15:49.976059] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:11.106 [2024-07-25 10:15:49.976072] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:11.107 [2024-07-25 10:15:50.065372] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:11.107 [2024-07-25 10:15:50.129378] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:11.107 [2024-07-25 10:15:50.129420] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:11.107 [2024-07-25 10:15:50.129441] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:11.107 [2024-07-25 10:15:50.129455] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:11.107 [2024-07-25 10:15:50.129463] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:11.107 [2024-07-25 10:15:50.134991] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xff1e50 was disconnected and freed. delete nvme_qpair. 
00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1431085 ']' 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431085' 00:27:11.367 killing process with pid 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1431085 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:11.367 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:11.655 rmmod nvme_tcp 00:27:11.655 rmmod nvme_fabrics 00:27:11.655 rmmod nvme_keyring 00:27:11.655 10:15:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1431042 ']' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 1431042 ']' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1431042' 00:27:11.655 killing process with pid 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 1431042 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:11.655 10:15:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:14.204 00:27:14.204 real 0m22.885s 00:27:14.204 user 0m27.004s 00:27:14.204 sys 0m6.727s 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.204 ************************************ 00:27:14.204 END TEST nvmf_discovery_remove_ifc 00:27:14.204 ************************************ 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.204 ************************************ 00:27:14.204 START TEST nvmf_identify_kernel_target 00:27:14.204 ************************************ 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:14.204 * Looking for test storage... 00:27:14.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.204 10:15:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:14.204 10:15:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:20.828 
10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:20.828 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:20.828 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.828 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:20.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:20.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:20.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:27:20.829 00:27:20.829 --- 10.0.0.2 ping statistics --- 00:27:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.829 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:20.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.401 ms 00:27:20.829 00:27:20.829 --- 10.0.0.1 ping statistics --- 00:27:20.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.829 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:20.829 10:15:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.169 Waiting for block devices as requested 00:27:24.169 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.169 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.169 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.169 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:24.429 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:24.430 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:24.430 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:24.689 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:24.689 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:24.949 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:24.949 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:24.949 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:24.949 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:25.209 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:25.209 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:25.209 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:25.209 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:27:25.469 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:25.731 No valid GPT data, bailing 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:25.731 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:25.731 00:27:25.731 Discovery Log Number of Records 2, Generation counter 2 00:27:25.731 =====Discovery Log Entry 0====== 00:27:25.731 trtype: tcp 00:27:25.731 adrfam: ipv4 00:27:25.731 subtype: current discovery subsystem 00:27:25.731 treq: not specified, sq flow control disable supported 00:27:25.731 portid: 1 00:27:25.731 trsvcid: 4420 00:27:25.731 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:25.731 traddr: 10.0.0.1 00:27:25.731 eflags: none 00:27:25.731 sectype: none 00:27:25.731 =====Discovery Log Entry 1====== 00:27:25.731 trtype: tcp 00:27:25.731 adrfam: ipv4 00:27:25.731 subtype: nvme subsystem 00:27:25.731 treq: not specified, sq flow control disable supported 00:27:25.731 portid: 1 00:27:25.731 trsvcid: 4420 00:27:25.731 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:25.731 traddr: 10.0.0.1 00:27:25.731 eflags: none 00:27:25.731 sectype: none 00:27:25.731 10:16:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:25.731 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:25.731 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.731 ===================================================== 00:27:25.731 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:25.731 ===================================================== 00:27:25.731 Controller Capabilities/Features 00:27:25.731 ================================ 00:27:25.731 Vendor ID: 0000 00:27:25.731 Subsystem Vendor ID: 0000 00:27:25.731 Serial Number: 772318b93dccf5efc004 00:27:25.731 Model Number: Linux 00:27:25.731 Firmware Version: 6.7.0-68 00:27:25.731 Recommended Arb Burst: 0 00:27:25.731 IEEE OUI Identifier: 00 00 00 00:27:25.731 Multi-path I/O 00:27:25.731 May have multiple subsystem ports: No 00:27:25.731 May have multiple controllers: No 00:27:25.731 Associated with SR-IOV VF: No 00:27:25.731 Max Data Transfer Size: Unlimited 00:27:25.731 Max Number of Namespaces: 0 00:27:25.731 Max Number of I/O Queues: 1024 00:27:25.731 NVMe Specification Version (VS): 1.3 00:27:25.731 NVMe Specification Version (Identify): 1.3 00:27:25.731 Maximum Queue Entries: 1024 00:27:25.731 Contiguous Queues Required: No 00:27:25.731 Arbitration Mechanisms Supported 00:27:25.731 Weighted Round Robin: Not Supported 00:27:25.731 Vendor Specific: Not Supported 00:27:25.731 Reset Timeout: 7500 ms 00:27:25.731 Doorbell Stride: 4 bytes 00:27:25.731 NVM Subsystem Reset: Not Supported 00:27:25.731 Command Sets Supported 00:27:25.731 NVM Command Set: Supported 00:27:25.731 Boot Partition: Not Supported 00:27:25.731 Memory Page Size Minimum: 4096 bytes 00:27:25.731 Memory Page Size Maximum: 4096 bytes 00:27:25.731 Persistent Memory Region: Not Supported 00:27:25.731 Optional Asynchronous Events Supported 00:27:25.731 Namespace Attribute Notices: Not Supported 00:27:25.731 Firmware Activation Notices: Not Supported 00:27:25.731 ANA Change Notices: Not Supported 00:27:25.731 PLE Aggregate Log Change Notices: Not Supported 00:27:25.731 LBA Status Info Alert Notices: Not Supported 00:27:25.731 EGE Aggregate Log Change Notices: Not Supported 00:27:25.731 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.731 Zone Descriptor Change Notices: Not Supported 00:27:25.731 Discovery Log Change Notices: Supported 00:27:25.731 Controller Attributes 00:27:25.731 128-bit Host Identifier: Not Supported 00:27:25.731 Non-Operational Permissive Mode: Not Supported 00:27:25.731 NVM Sets: Not Supported 00:27:25.731 Read Recovery Levels: Not Supported 00:27:25.731 Endurance Groups: Not Supported 00:27:25.731 Predictable Latency Mode: Not Supported 00:27:25.731 Traffic Based Keep ALive: Not Supported 00:27:25.731 Namespace Granularity: Not Supported 00:27:25.731 SQ Associations: Not Supported 00:27:25.731 UUID List: Not Supported 00:27:25.731 Multi-Domain Subsystem: Not Supported 00:27:25.731 Fixed Capacity Management: Not Supported 00:27:25.731 Variable Capacity Management: Not Supported 00:27:25.731 Delete Endurance Group: Not Supported 00:27:25.731 Delete NVM Set: Not Supported 00:27:25.731 Extended LBA Formats Supported: Not Supported 00:27:25.731 Flexible Data Placement Supported: Not Supported 00:27:25.731 00:27:25.731 Controller Memory Buffer Support 00:27:25.731 ================================ 00:27:25.731 Supported: No 
00:27:25.731 00:27:25.731 Persistent Memory Region Support 00:27:25.731 ================================ 00:27:25.731 Supported: No 00:27:25.731 00:27:25.731 Admin Command Set Attributes 00:27:25.731 ============================ 00:27:25.731 Security Send/Receive: Not Supported 00:27:25.731 Format NVM: Not Supported 00:27:25.731 Firmware Activate/Download: Not Supported 00:27:25.731 Namespace Management: Not Supported 00:27:25.731 Device Self-Test: Not Supported 00:27:25.731 Directives: Not Supported 00:27:25.731 NVMe-MI: Not Supported 00:27:25.731 Virtualization Management: Not Supported 00:27:25.731 Doorbell Buffer Config: Not Supported 00:27:25.731 Get LBA Status Capability: Not Supported 00:27:25.731 Command & Feature Lockdown Capability: Not Supported 00:27:25.731 Abort Command Limit: 1 00:27:25.731 Async Event Request Limit: 1 00:27:25.731 Number of Firmware Slots: N/A 00:27:25.731 Firmware Slot 1 Read-Only: N/A 00:27:25.731 Firmware Activation Without Reset: N/A 00:27:25.731 Multiple Update Detection Support: N/A 00:27:25.731 Firmware Update Granularity: No Information Provided 00:27:25.731 Per-Namespace SMART Log: No 00:27:25.732 Asymmetric Namespace Access Log Page: Not Supported 00:27:25.732 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:25.732 Command Effects Log Page: Not Supported 00:27:25.732 Get Log Page Extended Data: Supported 00:27:25.732 Telemetry Log Pages: Not Supported 00:27:25.732 Persistent Event Log Pages: Not Supported 00:27:25.732 Supported Log Pages Log Page: May Support 00:27:25.732 Commands Supported & Effects Log Page: Not Supported 00:27:25.732 Feature Identifiers & Effects Log Page:May Support 00:27:25.732 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.732 Data Area 4 for Telemetry Log: Not Supported 00:27:25.732 Error Log Page Entries Supported: 1 00:27:25.732 Keep Alive: Not Supported 00:27:25.732 00:27:25.732 NVM Command Set Attributes 00:27:25.732 ========================== 00:27:25.732 Submission Queue Entry Size 00:27:25.732 Max: 1 00:27:25.732 Min: 1 00:27:25.732 Completion Queue Entry Size 00:27:25.732 Max: 1 00:27:25.732 Min: 1 00:27:25.732 Number of Namespaces: 0 00:27:25.732 Compare Command: Not Supported 00:27:25.732 Write Uncorrectable Command: Not Supported 00:27:25.732 Dataset Management Command: Not Supported 00:27:25.732 Write Zeroes Command: Not Supported 00:27:25.732 Set Features Save Field: Not Supported 00:27:25.732 Reservations: Not Supported 00:27:25.732 Timestamp: Not Supported 00:27:25.732 Copy: Not Supported 00:27:25.732 Volatile Write Cache: Not Present 00:27:25.732 Atomic Write Unit (Normal): 1 00:27:25.732 Atomic Write Unit (PFail): 1 00:27:25.732 Atomic Compare & Write Unit: 1 00:27:25.732 Fused Compare & Write: Not Supported 00:27:25.732 Scatter-Gather List 00:27:25.732 SGL Command Set: Supported 00:27:25.732 SGL Keyed: Not Supported 00:27:25.732 SGL Bit Bucket Descriptor: Not Supported 00:27:25.732 SGL Metadata Pointer: Not Supported 00:27:25.732 Oversized SGL: Not Supported 00:27:25.732 SGL Metadata Address: Not Supported 00:27:25.732 SGL Offset: Supported 00:27:25.732 Transport SGL Data Block: Not Supported 00:27:25.732 Replay Protected Memory Block: Not Supported 00:27:25.732 00:27:25.732 Firmware Slot Information 00:27:25.732 ========================= 00:27:25.732 Active slot: 0 00:27:25.732 00:27:25.732 00:27:25.732 Error Log 00:27:25.732 ========= 00:27:25.732 00:27:25.732 Active Namespaces 00:27:25.732 ================= 00:27:25.732 Discovery Log Page 00:27:25.732 ================== 00:27:25.732 
Generation Counter: 2 00:27:25.732 Number of Records: 2 00:27:25.732 Record Format: 0 00:27:25.732 00:27:25.732 Discovery Log Entry 0 00:27:25.732 ---------------------- 00:27:25.732 Transport Type: 3 (TCP) 00:27:25.732 Address Family: 1 (IPv4) 00:27:25.732 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:25.732 Entry Flags: 00:27:25.732 Duplicate Returned Information: 0 00:27:25.732 Explicit Persistent Connection Support for Discovery: 0 00:27:25.732 Transport Requirements: 00:27:25.732 Secure Channel: Not Specified 00:27:25.732 Port ID: 1 (0x0001) 00:27:25.732 Controller ID: 65535 (0xffff) 00:27:25.732 Admin Max SQ Size: 32 00:27:25.732 Transport Service Identifier: 4420 00:27:25.732 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:25.732 Transport Address: 10.0.0.1 00:27:25.732 Discovery Log Entry 1 00:27:25.732 ---------------------- 00:27:25.732 Transport Type: 3 (TCP) 00:27:25.732 Address Family: 1 (IPv4) 00:27:25.732 Subsystem Type: 2 (NVM Subsystem) 00:27:25.732 Entry Flags: 00:27:25.732 Duplicate Returned Information: 0 00:27:25.732 Explicit Persistent Connection Support for Discovery: 0 00:27:25.732 Transport Requirements: 00:27:25.732 Secure Channel: Not Specified 00:27:25.732 Port ID: 1 (0x0001) 00:27:25.732 Controller ID: 65535 (0xffff) 00:27:25.732 Admin Max SQ Size: 32 00:27:25.732 Transport Service Identifier: 4420 00:27:25.732 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:25.732 Transport Address: 10.0.0.1 00:27:25.732 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:25.994 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.994 get_feature(0x01) failed 00:27:25.994 get_feature(0x02) failed 00:27:25.994 get_feature(0x04) failed 00:27:25.994 ===================================================== 00:27:25.994 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:25.994 ===================================================== 00:27:25.994 Controller Capabilities/Features 00:27:25.994 ================================ 00:27:25.994 Vendor ID: 0000 00:27:25.994 Subsystem Vendor ID: 0000 00:27:25.994 Serial Number: a9a39f5648c1c8c14092 00:27:25.994 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:25.994 Firmware Version: 6.7.0-68 00:27:25.994 Recommended Arb Burst: 6 00:27:25.994 IEEE OUI Identifier: 00 00 00 00:27:25.994 Multi-path I/O 00:27:25.994 May have multiple subsystem ports: Yes 00:27:25.994 May have multiple controllers: Yes 00:27:25.994 Associated with SR-IOV VF: No 00:27:25.994 Max Data Transfer Size: Unlimited 00:27:25.994 Max Number of Namespaces: 1024 00:27:25.994 Max Number of I/O Queues: 128 00:27:25.994 NVMe Specification Version (VS): 1.3 00:27:25.994 NVMe Specification Version (Identify): 1.3 00:27:25.994 Maximum Queue Entries: 1024 00:27:25.994 Contiguous Queues Required: No 00:27:25.994 Arbitration Mechanisms Supported 00:27:25.994 Weighted Round Robin: Not Supported 00:27:25.994 Vendor Specific: Not Supported 00:27:25.994 Reset Timeout: 7500 ms 00:27:25.994 Doorbell Stride: 4 bytes 00:27:25.994 NVM Subsystem Reset: Not Supported 00:27:25.994 Command Sets Supported 00:27:25.994 NVM Command Set: Supported 00:27:25.994 Boot Partition: Not Supported 00:27:25.994 Memory Page Size Minimum: 4096 bytes 00:27:25.994 Memory Page Size Maximum: 4096 bytes 00:27:25.994 
Persistent Memory Region: Not Supported 00:27:25.994 Optional Asynchronous Events Supported 00:27:25.994 Namespace Attribute Notices: Supported 00:27:25.994 Firmware Activation Notices: Not Supported 00:27:25.994 ANA Change Notices: Supported 00:27:25.994 PLE Aggregate Log Change Notices: Not Supported 00:27:25.994 LBA Status Info Alert Notices: Not Supported 00:27:25.994 EGE Aggregate Log Change Notices: Not Supported 00:27:25.994 Normal NVM Subsystem Shutdown event: Not Supported 00:27:25.994 Zone Descriptor Change Notices: Not Supported 00:27:25.994 Discovery Log Change Notices: Not Supported 00:27:25.994 Controller Attributes 00:27:25.994 128-bit Host Identifier: Supported 00:27:25.994 Non-Operational Permissive Mode: Not Supported 00:27:25.994 NVM Sets: Not Supported 00:27:25.994 Read Recovery Levels: Not Supported 00:27:25.994 Endurance Groups: Not Supported 00:27:25.994 Predictable Latency Mode: Not Supported 00:27:25.994 Traffic Based Keep ALive: Supported 00:27:25.994 Namespace Granularity: Not Supported 00:27:25.994 SQ Associations: Not Supported 00:27:25.994 UUID List: Not Supported 00:27:25.994 Multi-Domain Subsystem: Not Supported 00:27:25.994 Fixed Capacity Management: Not Supported 00:27:25.994 Variable Capacity Management: Not Supported 00:27:25.994 Delete Endurance Group: Not Supported 00:27:25.994 Delete NVM Set: Not Supported 00:27:25.994 Extended LBA Formats Supported: Not Supported 00:27:25.994 Flexible Data Placement Supported: Not Supported 00:27:25.994 00:27:25.994 Controller Memory Buffer Support 00:27:25.994 ================================ 00:27:25.994 Supported: No 00:27:25.994 00:27:25.994 Persistent Memory Region Support 00:27:25.994 ================================ 00:27:25.994 Supported: No 00:27:25.994 00:27:25.994 Admin Command Set Attributes 00:27:25.995 ============================ 00:27:25.995 Security Send/Receive: Not Supported 00:27:25.995 Format NVM: Not Supported 00:27:25.995 Firmware Activate/Download: Not Supported 00:27:25.995 Namespace Management: Not Supported 00:27:25.995 Device Self-Test: Not Supported 00:27:25.995 Directives: Not Supported 00:27:25.995 NVMe-MI: Not Supported 00:27:25.995 Virtualization Management: Not Supported 00:27:25.995 Doorbell Buffer Config: Not Supported 00:27:25.995 Get LBA Status Capability: Not Supported 00:27:25.995 Command & Feature Lockdown Capability: Not Supported 00:27:25.995 Abort Command Limit: 4 00:27:25.995 Async Event Request Limit: 4 00:27:25.995 Number of Firmware Slots: N/A 00:27:25.995 Firmware Slot 1 Read-Only: N/A 00:27:25.995 Firmware Activation Without Reset: N/A 00:27:25.995 Multiple Update Detection Support: N/A 00:27:25.995 Firmware Update Granularity: No Information Provided 00:27:25.995 Per-Namespace SMART Log: Yes 00:27:25.995 Asymmetric Namespace Access Log Page: Supported 00:27:25.995 ANA Transition Time : 10 sec 00:27:25.995 00:27:25.995 Asymmetric Namespace Access Capabilities 00:27:25.995 ANA Optimized State : Supported 00:27:25.995 ANA Non-Optimized State : Supported 00:27:25.995 ANA Inaccessible State : Supported 00:27:25.995 ANA Persistent Loss State : Supported 00:27:25.995 ANA Change State : Supported 00:27:25.995 ANAGRPID is not changed : No 00:27:25.995 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:25.995 00:27:25.995 ANA Group Identifier Maximum : 128 00:27:25.995 Number of ANA Group Identifiers : 128 00:27:25.995 Max Number of Allowed Namespaces : 1024 00:27:25.995 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:25.995 Command Effects Log Page: Supported 
00:27:25.995 Get Log Page Extended Data: Supported 00:27:25.995 Telemetry Log Pages: Not Supported 00:27:25.995 Persistent Event Log Pages: Not Supported 00:27:25.995 Supported Log Pages Log Page: May Support 00:27:25.995 Commands Supported & Effects Log Page: Not Supported 00:27:25.995 Feature Identifiers & Effects Log Page:May Support 00:27:25.995 NVMe-MI Commands & Effects Log Page: May Support 00:27:25.995 Data Area 4 for Telemetry Log: Not Supported 00:27:25.995 Error Log Page Entries Supported: 128 00:27:25.995 Keep Alive: Supported 00:27:25.995 Keep Alive Granularity: 1000 ms 00:27:25.995 00:27:25.995 NVM Command Set Attributes 00:27:25.995 ========================== 00:27:25.995 Submission Queue Entry Size 00:27:25.995 Max: 64 00:27:25.995 Min: 64 00:27:25.995 Completion Queue Entry Size 00:27:25.995 Max: 16 00:27:25.995 Min: 16 00:27:25.995 Number of Namespaces: 1024 00:27:25.995 Compare Command: Not Supported 00:27:25.995 Write Uncorrectable Command: Not Supported 00:27:25.995 Dataset Management Command: Supported 00:27:25.995 Write Zeroes Command: Supported 00:27:25.995 Set Features Save Field: Not Supported 00:27:25.995 Reservations: Not Supported 00:27:25.995 Timestamp: Not Supported 00:27:25.995 Copy: Not Supported 00:27:25.995 Volatile Write Cache: Present 00:27:25.995 Atomic Write Unit (Normal): 1 00:27:25.995 Atomic Write Unit (PFail): 1 00:27:25.995 Atomic Compare & Write Unit: 1 00:27:25.995 Fused Compare & Write: Not Supported 00:27:25.995 Scatter-Gather List 00:27:25.995 SGL Command Set: Supported 00:27:25.995 SGL Keyed: Not Supported 00:27:25.995 SGL Bit Bucket Descriptor: Not Supported 00:27:25.995 SGL Metadata Pointer: Not Supported 00:27:25.995 Oversized SGL: Not Supported 00:27:25.995 SGL Metadata Address: Not Supported 00:27:25.995 SGL Offset: Supported 00:27:25.995 Transport SGL Data Block: Not Supported 00:27:25.995 Replay Protected Memory Block: Not Supported 00:27:25.995 00:27:25.995 Firmware Slot Information 00:27:25.995 ========================= 00:27:25.995 Active slot: 0 00:27:25.995 00:27:25.995 Asymmetric Namespace Access 00:27:25.995 =========================== 00:27:25.995 Change Count : 0 00:27:25.995 Number of ANA Group Descriptors : 1 00:27:25.995 ANA Group Descriptor : 0 00:27:25.995 ANA Group ID : 1 00:27:25.995 Number of NSID Values : 1 00:27:25.995 Change Count : 0 00:27:25.995 ANA State : 1 00:27:25.995 Namespace Identifier : 1 00:27:25.995 00:27:25.995 Commands Supported and Effects 00:27:25.995 ============================== 00:27:25.995 Admin Commands 00:27:25.995 -------------- 00:27:25.995 Get Log Page (02h): Supported 00:27:25.995 Identify (06h): Supported 00:27:25.995 Abort (08h): Supported 00:27:25.995 Set Features (09h): Supported 00:27:25.995 Get Features (0Ah): Supported 00:27:25.995 Asynchronous Event Request (0Ch): Supported 00:27:25.995 Keep Alive (18h): Supported 00:27:25.995 I/O Commands 00:27:25.995 ------------ 00:27:25.995 Flush (00h): Supported 00:27:25.995 Write (01h): Supported LBA-Change 00:27:25.995 Read (02h): Supported 00:27:25.995 Write Zeroes (08h): Supported LBA-Change 00:27:25.995 Dataset Management (09h): Supported 00:27:25.995 00:27:25.995 Error Log 00:27:25.995 ========= 00:27:25.995 Entry: 0 00:27:25.995 Error Count: 0x3 00:27:25.995 Submission Queue Id: 0x0 00:27:25.995 Command Id: 0x5 00:27:25.995 Phase Bit: 0 00:27:25.995 Status Code: 0x2 00:27:25.995 Status Code Type: 0x0 00:27:25.995 Do Not Retry: 1 00:27:25.995 Error Location: 0x28 00:27:25.995 LBA: 0x0 00:27:25.995 Namespace: 0x0 00:27:25.995 Vendor Log 
Page: 0x0 00:27:25.995 ----------- 00:27:25.995 Entry: 1 00:27:25.995 Error Count: 0x2 00:27:25.995 Submission Queue Id: 0x0 00:27:25.995 Command Id: 0x5 00:27:25.995 Phase Bit: 0 00:27:25.995 Status Code: 0x2 00:27:25.995 Status Code Type: 0x0 00:27:25.995 Do Not Retry: 1 00:27:25.995 Error Location: 0x28 00:27:25.995 LBA: 0x0 00:27:25.995 Namespace: 0x0 00:27:25.995 Vendor Log Page: 0x0 00:27:25.995 ----------- 00:27:25.995 Entry: 2 00:27:25.995 Error Count: 0x1 00:27:25.995 Submission Queue Id: 0x0 00:27:25.995 Command Id: 0x4 00:27:25.995 Phase Bit: 0 00:27:25.995 Status Code: 0x2 00:27:25.995 Status Code Type: 0x0 00:27:25.995 Do Not Retry: 1 00:27:25.995 Error Location: 0x28 00:27:25.995 LBA: 0x0 00:27:25.995 Namespace: 0x0 00:27:25.995 Vendor Log Page: 0x0 00:27:25.995 00:27:25.995 Number of Queues 00:27:25.995 ================ 00:27:25.995 Number of I/O Submission Queues: 128 00:27:25.995 Number of I/O Completion Queues: 128 00:27:25.995 00:27:25.995 ZNS Specific Controller Data 00:27:25.995 ============================ 00:27:25.995 Zone Append Size Limit: 0 00:27:25.995 00:27:25.995 00:27:25.995 Active Namespaces 00:27:25.995 ================= 00:27:25.995 get_feature(0x05) failed 00:27:25.995 Namespace ID:1 00:27:25.995 Command Set Identifier: NVM (00h) 00:27:25.995 Deallocate: Supported 00:27:25.995 Deallocated/Unwritten Error: Not Supported 00:27:25.995 Deallocated Read Value: Unknown 00:27:25.995 Deallocate in Write Zeroes: Not Supported 00:27:25.995 Deallocated Guard Field: 0xFFFF 00:27:25.995 Flush: Supported 00:27:25.995 Reservation: Not Supported 00:27:25.995 Namespace Sharing Capabilities: Multiple Controllers 00:27:25.995 Size (in LBAs): 3750748848 (1788GiB) 00:27:25.995 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:25.995 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:25.995 UUID: d8cd2226-9946-41c0-bd19-50f8f5bee45d 00:27:25.995 Thin Provisioning: Not Supported 00:27:25.995 Per-NS Atomic Units: Yes 00:27:25.995 Atomic Write Unit (Normal): 8 00:27:25.995 Atomic Write Unit (PFail): 8 00:27:25.995 Preferred Write Granularity: 8 00:27:25.995 Atomic Compare & Write Unit: 8 00:27:25.995 Atomic Boundary Size (Normal): 0 00:27:25.995 Atomic Boundary Size (PFail): 0 00:27:25.995 Atomic Boundary Offset: 0 00:27:25.995 NGUID/EUI64 Never Reused: No 00:27:25.995 ANA group ID: 1 00:27:25.995 Namespace Write Protected: No 00:27:25.995 Number of LBA Formats: 1 00:27:25.995 Current LBA Format: LBA Format #00 00:27:25.996 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:25.996 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.996 rmmod nvme_tcp 00:27:25.996 rmmod nvme_fabrics 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.996 10:16:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.996 10:16:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.913 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:28.176 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:28.177 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:28.177 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:28.177 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:28.177 10:16:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:31.484 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.484 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 
0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:31.746 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:32.007 00:27:32.007 real 0m18.186s 00:27:32.007 user 0m4.797s 00:27:32.007 sys 0m10.355s 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:32.007 ************************************ 00:27:32.007 END TEST nvmf_identify_kernel_target 00:27:32.007 ************************************ 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:32.007 10:16:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.269 ************************************ 00:27:32.269 START TEST nvmf_auth_host 00:27:32.269 ************************************ 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:32.269 * Looking for test storage... 00:27:32.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # 
[[ phy != virt ]] 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:32.269 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.858 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.859 10:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:38.859 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:38.859 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:38.859 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:38.859 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.859 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:39.120 10:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:39.120 10:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:39.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:39.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:27:39.120 00:27:39.120 --- 10.0.0.2 ping statistics --- 00:27:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.120 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:39.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:39.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:27:39.120 00:27:39.120 --- 10.0.0.1 ping statistics --- 00:27:39.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:39.120 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1445212 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1445212 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1445212 ']' 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
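At this point the harness starts the SPDK target for the auth tests inside the cvl_0_0_ns_spdk namespace and waits for its RPC socket. A minimal sketch of what nvmfappstart/waitforlisten amount to here, assuming the stock rpc.py client and its rpc_get_methods call; the retry count and interval are illustrative, not the harness defaults.

# Hedged sketch: launch nvmf_tgt in the target namespace, then poll the default
# RPC socket (/var/tmp/spdk.sock) until the app answers, as waitforlisten does.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!

for _ in $(seq 1 100); do
    # the unix socket lives on the shared filesystem, so rpc.py needs no netns exec
    if "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &> /dev/null; then
        echo "nvmf_tgt ($nvmfpid) is up on /var/tmp/spdk.sock"
        break
    fi
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done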
00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:39.120 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3c3477d457a95ada7c32b077f9a7ec68 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.SQn 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3c3477d457a95ada7c32b077f9a7ec68 0 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3c3477d457a95ada7c32b077f9a7ec68 0 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3c3477d457a95ada7c32b077f9a7ec68 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:40.061 10:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.SQn 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.SQn 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.SQn 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.061 10:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2c7259549278991ab38833cc022c62cefc98cbe2beb16fc5d5e71fac48d59ed9 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8Rk 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2c7259549278991ab38833cc022c62cefc98cbe2beb16fc5d5e71fac48d59ed9 3 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2c7259549278991ab38833cc022c62cefc98cbe2beb16fc5d5e71fac48d59ed9 3 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2c7259549278991ab38833cc022c62cefc98cbe2beb16fc5d5e71fac48d59ed9 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8Rk 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8Rk 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8Rk 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f2f0d3447b06e11fb8be0b2eeb284170ddb4cec28a2fa4b 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.wAA 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f2f0d3447b06e11fb8be0b2eeb284170ddb4cec28a2fa4b 0 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f2f0d3447b06e11fb8be0b2eeb284170ddb4cec28a2fa4b 0 
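The keys[] and ckeys[] entries being built in these entries come from gen_dhchap_key, which the trace shows as: pull random bytes with xxd, format them with a small python helper, and store the result in a 0600 temp file. A hedged sketch of that flow follows; the python body is an assumption (the trace only shows that a 'python -' step runs) and is written to emit the conventional DH-HMAC-CHAP secret representation, DHHC-1:<digest id>:base64(key + CRC-32):.

# Hedged sketch of gen_dhchap_key/format_dhchap_key as traced above. Function name
# and python body are illustrative, not the harness implementation.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                                  # e.g. "null 32" or "sha512 64"
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local hex file

    hex=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)          # len hex characters of key material
    file=$(mktemp -t "spdk.key-$digest.XXX")

    python3 -c '
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")                 # CRC-32 of the key, little-endian
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$hex" "${digests[$digest]}" > "$file"

    chmod 0600 "$file"
    echo "$file"
}

# usage mirroring the trace: keys[0]=$(gen_dhchap_key_sketch null 32); ckeys[0]=$(gen_dhchap_key_sketch sha512 64)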
00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f2f0d3447b06e11fb8be0b2eeb284170ddb4cec28a2fa4b 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.wAA 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.wAA 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.wAA 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4760512f3659a2f1bc96843d74fa7171adafc2ec31049dc4 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.drU 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4760512f3659a2f1bc96843d74fa7171adafc2ec31049dc4 2 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4760512f3659a2f1bc96843d74fa7171adafc2ec31049dc4 2 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4760512f3659a2f1bc96843d74fa7171adafc2ec31049dc4 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:40.061 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.drU 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.drU 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.drU 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.323 10:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0a8e4b150931d9ed7d4f6917adc34eb0 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.gXA 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0a8e4b150931d9ed7d4f6917adc34eb0 1 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0a8e4b150931d9ed7d4f6917adc34eb0 1 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0a8e4b150931d9ed7d4f6917adc34eb0 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.gXA 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.gXA 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.gXA 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b9b190d8aab7ea1da43c73b18666bfbe 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:40.323 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.bVq 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b9b190d8aab7ea1da43c73b18666bfbe 1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b9b190d8aab7ea1da43c73b18666bfbe 1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=b9b190d8aab7ea1da43c73b18666bfbe 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.bVq 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.bVq 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.bVq 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=29f18c5c7586eba27c7313ccd1e6a435377803b5f8d0e31b 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.qnN 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 29f18c5c7586eba27c7313ccd1e6a435377803b5f8d0e31b 2 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 29f18c5c7586eba27c7313ccd1e6a435377803b5f8d0e31b 2 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=29f18c5c7586eba27c7313ccd1e6a435377803b5f8d0e31b 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.qnN 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.qnN 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qnN 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:40.324 10:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f27213645ffa8fd69405ba0b8e1379d3 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ENP 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f27213645ffa8fd69405ba0b8e1379d3 0 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f27213645ffa8fd69405ba0b8e1379d3 0 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f27213645ffa8fd69405ba0b8e1379d3 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ENP 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ENP 00:27:40.324 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ENP 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cb6146c664d8fab8b75e62c501cee500671768f0f901e83bfef7b2bda6403405 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.GFl 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cb6146c664d8fab8b75e62c501cee500671768f0f901e83bfef7b2bda6403405 3 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cb6146c664d8fab8b75e62c501cee500671768f0f901e83bfef7b2bda6403405 3 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cb6146c664d8fab8b75e62c501cee500671768f0f901e83bfef7b2bda6403405 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.GFl 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.GFl 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GFl 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1445212 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 1445212 ']' 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.SQn 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8Rk ]] 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Rk 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.wAA 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.586 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.847 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.847 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.drU ]] 00:27:40.847 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.drU 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.gXA 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.bVq ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.bVq 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qnN 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ENP ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ENP 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GFl 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.848 10:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:40.848 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:44.149 Waiting for block devices as requested 00:27:44.149 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:44.149 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:44.149 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:44.149 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:44.408 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:44.408 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:44.408 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:44.668 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:44.668 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:44.928 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:44.928 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:44.928 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:45.226 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:45.226 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:45.226 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:45.496 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:45.496 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:46.438 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:46.439 No valid GPT data, bailing 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:46.439 10:16:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:46.439 00:27:46.439 Discovery Log Number of Records 2, Generation counter 2 00:27:46.439 =====Discovery Log Entry 0====== 00:27:46.439 trtype: tcp 00:27:46.439 adrfam: ipv4 00:27:46.439 subtype: current discovery subsystem 00:27:46.439 treq: not specified, sq flow control disable supported 00:27:46.439 portid: 1 00:27:46.439 trsvcid: 4420 00:27:46.439 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:46.439 traddr: 10.0.0.1 00:27:46.439 eflags: none 00:27:46.439 sectype: none 00:27:46.439 =====Discovery Log Entry 1====== 00:27:46.439 trtype: tcp 00:27:46.439 adrfam: ipv4 00:27:46.439 subtype: nvme subsystem 00:27:46.439 treq: not specified, sq flow control disable supported 00:27:46.439 portid: 1 00:27:46.439 trsvcid: 4420 00:27:46.439 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:46.439 traddr: 10.0.0.1 00:27:46.439 eflags: none 00:27:46.439 sectype: none 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.439 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.700 nvme0n1 00:27:46.700 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
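Everything the kernel-target side does in this section goes through nvmet configfs: configure_kernel_target builds the subsystem/namespace/port, and nvmet_auth_set_key drops the per-host DH-HMAC-CHAP material in place. The xtrace above does not print the redirection targets of the echo calls, so the attribute paths in the sketch below are the standard nvmet configfs names and should be read as assumptions, not as lines from the script.

# Hedged reconstruction of the configfs writes traced above (attribute file
# names are assumed; only the mkdir/ln -s/echo values appear in the log).
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port" "$host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# host/auth.sh@36-51 above: allow only the test host and set its DH-CHAP secrets
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$host" "$subsys/allowed_hosts/"
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048 > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"        # host secret (keys[0])
echo 'DHHC-1:03:...' > "$host/dhchap_ctrl_key"   # controller secret (ckeys[0])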
00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.701 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.962 nvme0n1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.962 10:16:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.962 10:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.223 nvme0n1 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.223 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.224 nvme0n1 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:47.224 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.485 nvme0n1 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.485 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 nvme0n1 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.747 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.009 10:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.009 nvme0n1 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.009 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.270 
10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.270 nvme0n1 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:48.270 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.270 10:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.531 nvme0n1 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.531 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.532 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.532 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.532 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:48.792 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.793 10:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.793 nvme0n1 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.793 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.054 10:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.054 10:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.054 nvme0n1 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.054 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.316 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.577 nvme0n1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:49.577 10:16:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.577 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.837 nvme0n1 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.838 10:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.098 nvme0n1 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.098 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.358 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.359 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.620 nvme0n1 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.620 10:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.620 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.882 nvme0n1 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.882 10:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.882 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.454 nvme0n1 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.454 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 
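The trace above repeats one and the same pattern for every digest, DH group and key index: the target-side key is staged with nvmet_auth_set_key, the host side is configured with bdev_nvme_set_options, the authenticated connection is made with bdev_nvme_attach_controller, checked via bdev_nvme_get_controllers, and torn down with bdev_nvme_detach_controller. A minimal sketch of the host-side half of one such iteration, written as direct scripts/rpc.py calls rather than the test's rpc_cmd wrapper and reusing the NQNs, address and key names (key1/ckey1) that appear in this log; it assumes the target subsystem and the keys registered earlier in the run are still in place:

  # host-side half of one connect_authenticate iteration (sha256 / ffdhe6144 / key index 1), sketch only
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0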
00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.455 10:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 nvme0n1 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.026 10:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.026 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 nvme0n1 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:52.597 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.598 10:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.168 nvme0n1 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.168 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.169 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.742 nvme0n1 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.742 10:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:54.684 nvme0n1 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.684 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.685 10:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 nvme0n1 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:55.257 
10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.257 10:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.200 nvme0n1 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.200 
10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.200 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.142 nvme0n1 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.142 10:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.142 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.143 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.714 nvme0n1 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.714 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.975 10:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.975 nvme0n1 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.976 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.237 nvme0n1 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.237 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:58.238 10:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.238 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.499 nvme0n1 00:27:58.499 10:16:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.499 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.759 nvme0n1 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.759 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
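Worth noting in the pass above: for keyid 4 there is no controller key (ckey is empty), so the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion at host/auth.sh@58 contributes nothing and the attach runs with --dhchap-key key4 only, i.e. without bidirectional authentication. A small stand-alone illustration of that bash idiom, with made-up array values:

    # ${arr[i]:+word} expands to 'word' only when arr[i] is set and non-empty,
    # so an empty controller key adds no --dhchap-ctrlr-key argument at all.
    ckeys=("some-ctrlr-key" "")          # illustrative values only
    for keyid in 0 1; do
        extra=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#extra[@]} extra argument(s)"
    done
    # prints: keyid=0 -> 2 extra argument(s), keyid=1 -> 0 extra argument(s)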
common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.760 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.020 nvme0n1 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.020 10:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.281 nvme0n1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.281 
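From here the trace has moved on to the second DH group: the digest is still sha384, but nvmet_auth_set_key and connect_authenticate now run with ffdhe3072, again once per key id. Reconstructed loosely from the host/auth.sh@100-@104 markers in the trace (the helper bodies are the traced functions above), the driving loop looks roughly like this:

    # Loose reconstruction of the loop visible at host/auth.sh@100-104 (sketch).
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done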
10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.281 10:16:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.281 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.546 nvme0n1 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.546 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.547 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.547 nvme0n1 00:27:59.547 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.547 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.830 nvme0n1 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.830 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.090 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.090 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.091 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.091 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.091 10:16:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:00.091 
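The get_main_ns_ip passes repeated throughout this excerpt (nvmf/common.sh@741-755) are what resolve the 10.0.0.1 used in every attach: the helper keeps a small transport-to-variable map and, for tcp, indirects through NVMF_INITIATOR_IP. A rough sketch of that selection; the transport variable name below is a stand-in, since only its expanded value (tcp) appears in the trace:

    # Address selection as suggested by the nvmf/common.sh@741-755 trace (sketch).
    NVMF_INITIATOR_IP=10.0.0.1           # value observed in this run
    declare -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    transport=tcp                        # stand-in for the harness' transport variable
    var=${ip_candidates[$transport]}     # -> NVMF_INITIATOR_IP
    echo "${!var}"                       # indirect expansion -> 10.0.0.1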
10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.091 nvme0n1 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.091 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.352 
10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:00.352 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.353 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.613 nvme0n1 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.613 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.614 10:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.614 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.876 nvme0n1 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.876 10:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.876 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.136 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.397 nvme0n1 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:01.397 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.398 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.659 nvme0n1 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:01.659 10:16:40 
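
After every attach the trace verifies that exactly the expected controller (nvme0) came up and detaches it before moving on to the next key. A short sketch of that verify-and-detach check, under the same scripts/rpc.py assumption as above:

# Hedged sketch of the verify-and-detach step the trace repeats after each attach.
# Assumes scripts/rpc.py and jq are available, as in the traced run.
name=$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
if [[ "$name" == "nvme0" ]]; then
    echo "authentication succeeded for this key/dhgroup combination"
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
else
    echo "unexpected controller list: ${name:-<empty>}" >&2
    exit 1
fi
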
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.659 10:16:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.920 nvme0n1 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.920 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.180 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.181 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.441 nvme0n1 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.441 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.702 10:16:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.273 nvme0n1 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.273 10:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.273 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.274 10:16:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.274 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.534 nvme0n1 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.535 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.795 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.796 10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.796 
10:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.367 nvme0n1 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.367 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.368 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.629 nvme0n1 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.629 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.889 10:16:43 
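
The get_main_ns_ip helper that keeps recurring in the trace only decides which environment variable carries the target address for the transport in use; for TCP it resolves to NVMF_INITIATOR_IP, i.e. 10.0.0.1 in this run. A hedged re-creation of that selection logic is below; the ${!ip} indirection is an assumption, since the trace only shows the already-resolved address.

# Hedged re-creation of the IP-selection logic traced as get_main_ns_ip.
# TEST_TRANSPORT, NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP come from the test
# environment; the indirect expansion at the end is assumed, not shown in the trace.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    [[ -z "${TEST_TRANSPORT:-}" ]] && return 1
    [[ -z "${ip_candidates[$TEST_TRANSPORT]:-}" ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z "${!ip:-}" ]] && return 1
    echo "${!ip}"
}
# e.g. TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip   # prints 10.0.0.1
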
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.889 10:16:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.462 nvme0n1 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.462 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.722 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:05.723 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.723 10:16:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 nvme0n1 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.294 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.295 
10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.295 10:16:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.238 nvme0n1 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:07.238 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.239 10:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.181 nvme0n1 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.181 10:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:08.181 10:16:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.181 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.752 nvme0n1 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.752 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:09.013 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.014 10:16:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.014 nvme0n1 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.014 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.275 nvme0n1 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:09.275 
10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.275 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.276 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.537 nvme0n1 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.537 
10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.537 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.538 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.798 nvme0n1 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:09.798 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.799 10:16:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.059 nvme0n1 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:10.059 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.060 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.321 nvme0n1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.321 
10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.321 10:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:10.321 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.322 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.583 nvme0n1 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:10.583 10:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.583 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.844 nvme0n1 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.844 10:16:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.844 10:16:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.105 nvme0n1 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.105 
10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.105 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
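The entries above are one pass of the test's per-key loop: the target side is handed the DH-HMAC-CHAP secret for a given (digest, dhgroup, keyid) combination via nvmet_auth_set_key, then the host restricts bdev_nvme to that digest/dhgroup, attaches over TCP with the matching key pair, confirms the controller came up, and detaches before the next keyid. A minimal host-side sketch of that sequence follows, assuming SPDK's scripts/rpc.py stands in for the test's rpc_cmd wrapper and that key2/ckey2 are key names registered with the SPDK keyring earlier in the script (not shown in this excerpt); the addresses and NQNs are the ones appearing in the trace.

    digest=sha512; dhgroup=ffdhe3072; keyid=2

    # Allow only the digest/dhgroup pair under test for DH-HMAC-CHAP.
    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Connect to the authenticated subsystem with the host key and the
    # bidirectional controller key for this keyid (keyring names, assumed
    # to have been registered earlier in the test).
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # Attach succeeds only if authentication passed; verify, then clean up
    # so the next (digest, dhgroup, keyid) iteration starts from scratch.
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

The remainder of this section repeats the same cycle for the other key IDs and for the ffdhe4096, ffdhe6144, and ffdhe8192 groups.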
00:28:11.366 nvme0n1 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.366 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.367 10:16:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.367 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.628 nvme0n1 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.628 10:16:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.628 10:16:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:11.628 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:11.629 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.629 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.629 10:16:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.889 nvme0n1 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.889 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.150 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.410 nvme0n1 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.410 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.411 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.671 nvme0n1 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.671 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.934 10:16:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.196 nvme0n1 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.196 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.196 10:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.197 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 nvme0n1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.782 10:16:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.782 10:16:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.076 nvme0n1 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.076 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.337 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 nvme0n1 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.598 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.859 10:16:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.119 nvme0n1 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.119 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.380 10:16:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.380 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.640 nvme0n1 00:28:15.640 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.640 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.640 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.640 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.640 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
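Each pass traced above follows the same shape: program the DH-HMAC-CHAP secret for the host on the kernel nvmet target (nvmet_auth_set_key), then, on the SPDK initiator side, restrict negotiation to one digest/dhgroup, attach with the matching key material, confirm the controller shows up, and detach (connect_authenticate). A condensed sketch of one such pass is below; the configfs attribute names and the rpc.py invocation are assumptions inferred from the trace, not a copy of host/auth.sh, and the DHHC-1 secrets are truncated here.

    # One sha512/ffdhe6144 pass for keyid 3, roughly as traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0

    # Target side: store the host secret (and the bidirectional controller
    # secret) plus hash/dhgroup in the nvmet configfs host entry.  The
    # attribute names below are assumptions.
    hostdir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'DHHC-1:02:MjlmMThjNWM3...' > "$hostdir/dhchap_key"       # key3 (truncated)
    echo 'DHHC-1:00:ZjI3MjEzNjQ1...' > "$hostdir/dhchap_ctrl_key"  # ckey3 (truncated)
    echo 'hmac(sha512)'              > "$hostdir/dhchap_hash"
    echo 'ffdhe6144'                 > "$hostdir/dhchap_dhgroup"

    # Host side: pin the initiator to one digest/dhgroup, attach using the
    # key names (key3/ckey3) registered with SPDK earlier in the test,
    # verify the controller exists, then detach.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0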
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:M2MzNDc3ZDQ1N2E5NWFkYTdjMzJiMDc3ZjlhN2VjNjiVB3HQ: 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: ]] 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmM3MjU5NTQ5Mjc4OTkxYWIzODgzM2NjMDIyYzYyY2VmYzk4Y2JlMmJlYjE2ZmM1ZDVlNzFmYWM0OGQ1OWVkOVWVws4=: 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:15.900 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.901 10:16:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.472 nvme0n1 00:28:16.472 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.472 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.472 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.472 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.472 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.733 10:16:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.304 nvme0n1 00:28:17.304 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.304 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.304 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.304 10:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.304 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.304 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MGE4ZTRiMTUwOTMxZDllZDdkNGY2OTE3YWRjMzRlYjDN8i2k: 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YjliMTkwZDhhYWI3ZWExZGE0M2M3M2IxODY2NmJmYmU/9ypM: 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.565 10:16:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.565 10:16:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.137 nvme0n1 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.137 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjlmMThjNWM3NTg2ZWJhMjdjNzMxM2NjZDFlNmE0MzUzNzc4MDNiNWY4ZDBlMzFiEn78Vg==: 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZjI3MjEzNjQ1ZmZhOGZkNjk0MDViYTBiOGUxMzc5ZDPPq8hL: 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.398 10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.398 
10:16:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.970 nvme0n1 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.970 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Y2I2MTQ2YzY2NGQ4ZmFiOGI3NWU2MmM1MDFjZWU1MDA2NzE3NjhmMGY5MDFlODNiZmVmN2IyYmRhNjQwMzQwNeSpb8Y=: 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.231 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.232 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.232 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.232 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.803 nvme0n1 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.803 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
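The repetition above is driven by a short nested loop rather than hand-written cases: the host/auth.sh@101-104 markers show an outer loop over the DH groups and an inner loop over the key indices, each iteration calling nvmet_auth_set_key and then connect_authenticate. A structural sketch follows, with the two helpers stubbed out so the skeleton runs on its own; the real helpers are the ones traced above.

    # Driver-loop skeleton corresponding to host/auth.sh@101-104.
    nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }  # stub
    connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }  # stub

    digest=sha512
    dhgroups=(ffdhe6144 ffdhe8192)   # the groups visible in this excerpt
    keyids=(0 1 2 3 4)               # the key indices visible in this excerpt

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${keyids[@]}"; do
            nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done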
-- # keyid=1 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NWYyZjBkMzQ0N2IwNmUxMWZiOGJlMGIyZWViMjg0MTcwZGRiNGNlYzI4YTJmYTRiaUg5cA==: 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDc2MDUxMmYzNjU5YTJmMWJjOTY4NDNkNzRmYTcxNzFhZGFmYzJlYzMxMDQ5ZGM03sD1Bw==: 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 10:16:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 request: 00:28:20.064 { 00:28:20.064 "name": "nvme0", 00:28:20.064 "trtype": "tcp", 00:28:20.064 "traddr": "10.0.0.1", 00:28:20.064 "adrfam": "ipv4", 00:28:20.064 "trsvcid": "4420", 00:28:20.064 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:20.064 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:20.064 "prchk_reftag": false, 00:28:20.064 "prchk_guard": false, 00:28:20.064 "hdgst": false, 00:28:20.064 "ddgst": false, 00:28:20.064 "method": "bdev_nvme_attach_controller", 00:28:20.064 "req_id": 1 00:28:20.064 } 00:28:20.064 Got JSON-RPC error response 00:28:20.064 response: 00:28:20.064 { 00:28:20.064 "code": -5, 00:28:20.064 "message": "Input/output error" 00:28:20.064 } 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.064 10:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.064 request: 00:28:20.064 { 00:28:20.064 "name": "nvme0", 00:28:20.064 "trtype": "tcp", 00:28:20.064 "traddr": "10.0.0.1", 00:28:20.064 "adrfam": "ipv4", 00:28:20.064 "trsvcid": "4420", 00:28:20.064 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:20.064 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:20.064 "prchk_reftag": false, 00:28:20.064 "prchk_guard": false, 00:28:20.064 "hdgst": false, 00:28:20.064 "ddgst": false, 00:28:20.064 "dhchap_key": "key2", 00:28:20.064 "method": "bdev_nvme_attach_controller", 00:28:20.064 "req_id": 1 00:28:20.064 } 00:28:20.064 Got JSON-RPC error response 00:28:20.064 response: 00:28:20.064 { 00:28:20.064 "code": -5, 00:28:20.064 "message": "Input/output error" 00:28:20.064 } 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:20.064 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:20.065 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.065 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:20.065 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.065 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.065 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.325 request: 00:28:20.325 { 00:28:20.325 "name": "nvme0", 00:28:20.325 "trtype": "tcp", 00:28:20.325 "traddr": "10.0.0.1", 00:28:20.325 "adrfam": "ipv4", 00:28:20.325 "trsvcid": "4420", 00:28:20.325 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:20.325 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:20.325 "prchk_reftag": false, 00:28:20.325 "prchk_guard": false, 00:28:20.325 "hdgst": false, 00:28:20.325 "ddgst": false, 00:28:20.325 "dhchap_key": "key1", 00:28:20.325 "dhchap_ctrlr_key": "ckey2", 00:28:20.325 "method": "bdev_nvme_attach_controller", 00:28:20.325 "req_id": 1 00:28:20.325 } 00:28:20.325 Got JSON-RPC error response 00:28:20.325 response: 00:28:20.325 { 00:28:20.325 "code": -5, 00:28:20.325 "message": "Input/output error" 00:28:20.325 } 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
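With the positive matrix done, the test switches to negative cases: the target is re-keyed for sha256/ffdhe2048 keyid 1, and then an attach with no DH-HMAC-CHAP key, an attach with only key2, and an attach pairing key1 with ckey2 must all be rejected; the trace shows each attempt coming back as a JSON-RPC error (code -5, "Input/output error"). The NOT wrapper from autotest_common.sh inverts the exit status so a refused attach counts as a pass. A simplified stand-in for that pattern (the NOT helper below is a sketch, not the real autotest_common.sh implementation):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1
        fi
        return 0
    }

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # The target expects the sha256/ffdhe2048 secret set just above, so an
    # attach with missing or mismatched credentials must fail.
    NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    NOT "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2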
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:20.325 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:20.326 rmmod nvme_tcp 00:28:20.326 rmmod nvme_fabrics 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1445212 ']' 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1445212 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 1445212 ']' 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 1445212 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1445212 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1445212' 00:28:20.326 killing process with pid 1445212 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 1445212 00:28:20.326 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 1445212 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:20.588 10:16:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.588 10:16:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:22.505 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:22.766 10:17:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:26.070 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:26.070 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:26.331 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:26.593 10:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.SQn /tmp/spdk.key-null.wAA /tmp/spdk.key-sha256.gXA /tmp/spdk.key-sha384.qnN /tmp/spdk.key-sha512.GFl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:26.593 10:17:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
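The teardown traced here walks the setup back in the opposite order: nvmftestfini stops the SPDK side and unloads nvme-tcp/nvme-fabrics, clean_kernel_target unwinds the nvmet configfs tree, and the generated DHHC-1 key files are deleted. Condensed, the kernel-target portion looks roughly like the following; the paths are the ones shown in the trace, and the ordering matters because a configfs directory only removes cleanly once its links and children are gone.

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    port=/sys/kernel/config/nvmet/ports/1

    rm    "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"  # drop the host link first
    rmdir "$hostdir"                                         # then the host entry
    rm -f "$port/subsystems/nqn.2024-02.io.spdk:cnode0"      # unlink the subsystem from the port
    rmdir "$subsys/namespaces/1"                             # remove the namespace
    rmdir "$port"                                            # remove the port
    rmdir "$subsys"                                          # finally the subsystem itself
    modprobe -r nvmet_tcp nvmet                              # unload the kernel target

    # Throw away the generated secrets (abbreviated with a glob here; the
    # trace removes each temp key file and the nvme-auth.log individually).
    rm -f /tmp/spdk.key-*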
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:29.902 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:29.902 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:29.902 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:30.475 00:28:30.475 real 0m58.141s 00:28:30.475 user 0m51.638s 00:28:30.475 sys 0m14.958s 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.475 ************************************ 00:28:30.475 END TEST nvmf_auth_host 00:28:30.475 ************************************ 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.475 ************************************ 00:28:30.475 START TEST nvmf_digest 00:28:30.475 ************************************ 00:28:30.475 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.476 * Looking for test storage... 
00:28:30.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:30.476 
10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:30.476 10:17:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.118 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:37.118 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:37.118 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:37.118 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:37.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:37.119 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.119 
10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:37.119 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:37.119 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:37.119 10:17:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:37.119 10:17:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:37.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:37.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:28:37.119 00:28:37.119 --- 10.0.0.2 ping statistics --- 00:28:37.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.119 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:37.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:37.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.396 ms 00:28:37.119 00:28:37.119 --- 10.0.0.1 ping statistics --- 00:28:37.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:37.119 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:37.119 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:37.120 ************************************ 00:28:37.120 START TEST nvmf_digest_clean 00:28:37.120 ************************************ 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1461723 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1461723 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1461723 ']' 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.120 10:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:37.380 [2024-07-25 10:17:16.256235] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:37.380 [2024-07-25 10:17:16.256296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.380 EAL: No free 2048 kB hugepages reported on node 1 00:28:37.380 [2024-07-25 10:17:16.326754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.380 [2024-07-25 10:17:16.399931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:37.380 [2024-07-25 10:17:16.399973] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:37.380 [2024-07-25 10:17:16.399982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:37.380 [2024-07-25 10:17:16.399988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:37.380 [2024-07-25 10:17:16.399994] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
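The nvmf_tcp_init steps interleaved in the xtrace above are easier to read condensed. A minimal sketch of what common.sh just did, using the interface, namespace and address values this particular run reports (cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk, 10.0.0.1 and 10.0.0.2) rather than anything guaranteed by the harness:

  # Hedged recap of the nvmf_tcp_init sequence logged above; run as root.
  ip netns add cvl_0_0_ns_spdk                                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                             # reachability checks, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target itself is then launched inside the namespace with RPC startup deferred:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc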
00:28:37.380 [2024-07-25 10:17:16.400018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.952 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.213 null0 00:28:38.213 [2024-07-25 10:17:17.130592] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:38.213 [2024-07-25 10:17:17.154759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1461887 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1461887 /var/tmp/bperf.sock 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1461887 ']' 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:38.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:38.213 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:38.213 [2024-07-25 10:17:17.207837] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:38.213 [2024-07-25 10:17:17.207884] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1461887 ] 00:28:38.213 EAL: No free 2048 kB hugepages reported on node 1 00:28:38.213 [2024-07-25 10:17:17.281456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.213 [2024-07-25 10:17:17.345437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.157 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:39.157 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:39.157 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:39.157 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:39.157 10:17:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:39.157 10:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.157 10:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:39.417 nvme0n1 00:28:39.417 10:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:39.417 10:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:39.417 Running I/O for 2 seconds... 
00:28:41.962 00:28:41.962 Latency(us) 00:28:41.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.962 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:41.962 nvme0n1 : 2.00 20821.49 81.33 0.00 0.00 6140.39 3167.57 15182.51 00:28:41.962 =================================================================================================================== 00:28:41.962 Total : 20821.49 81.33 0.00 0.00 6140.39 3167.57 15182.51 00:28:41.962 0 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:41.962 | select(.opcode=="crc32c") 00:28:41.962 | "\(.module_name) \(.executed)"' 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1461887 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1461887 ']' 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1461887 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461887 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461887' 00:28:41.962 killing process with pid 1461887 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1461887 00:28:41.962 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.962 00:28:41.962 Latency(us) 00:28:41.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.962 =================================================================================================================== 00:28:41.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1461887 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:41.962 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1462576 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1462576 /var/tmp/bperf.sock 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1462576 ']' 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:41.963 10:17:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:41.963 [2024-07-25 10:17:20.971905] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:41.963 [2024-07-25 10:17:20.971962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462576 ] 00:28:41.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:41.963 Zero copy mechanism will not be used. 
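Every run_bperf iteration in this suite repeats the same control sequence; condensed from the commands logged above and below, with $SPDK_ROOT standing in for the workspace path (the bperf socket, target address and NQN are the values this run uses, not constants):

  # Hedged recap of one digest_clean bperf run; values taken from this log.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # 1. Start bdevperf paused on its own RPC socket (the harness backgrounds it and waits for the socket).
  $SPDK_ROOT/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &

  # 2. Complete framework init, then attach an NVMe-oF/TCP controller with data digest enabled.
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # 3. Drive the timed workload against the resulting nvme0n1 bdev.
  $SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests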
00:28:41.963 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.963 [2024-07-25 10:17:21.049116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.223 [2024-07-25 10:17:21.112216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.794 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.794 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:42.794 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:42.794 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:42.794 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:43.055 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.055 10:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.315 nvme0n1 00:28:43.315 10:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:43.315 10:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.315 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.315 Zero copy mechanism will not be used. 00:28:43.315 Running I/O for 2 seconds... 
00:28:45.861 00:28:45.861 Latency(us) 00:28:45.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.861 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:45.861 nvme0n1 : 2.01 1984.90 248.11 0.00 0.00 8056.55 3249.49 20206.93 00:28:45.861 =================================================================================================================== 00:28:45.861 Total : 1984.90 248.11 0.00 0.00 8056.55 3249.49 20206.93 00:28:45.861 0 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.861 | select(.opcode=="crc32c") 00:28:45.861 | "\(.module_name) \(.executed)"' 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1462576 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1462576 ']' 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1462576 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1462576 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1462576' 00:28:45.861 killing process with pid 1462576 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1462576 00:28:45.861 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.861 00:28:45.861 Latency(us) 00:28:45.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.861 =================================================================================================================== 00:28:45.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1462576 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1463308 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1463308 /var/tmp/bperf.sock 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1463308 ']' 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:45.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:45.861 10:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:45.862 [2024-07-25 10:17:24.760371] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:45.862 [2024-07-25 10:17:24.760428] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463308 ] 00:28:45.862 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.862 [2024-07-25 10:17:24.835244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.862 [2024-07-25 10:17:24.888698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:46.434 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:46.434 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:46.434 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:46.434 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:46.434 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:46.695 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.695 10:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:46.956 nvme0n1 00:28:46.956 10:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:46.956 10:17:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:47.217 Running I/O for 2 seconds... 
00:28:49.133 00:28:49.133 Latency(us) 00:28:49.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.133 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:49.133 nvme0n1 : 2.01 21256.13 83.03 0.00 0.00 6010.30 4833.28 22173.01 00:28:49.133 =================================================================================================================== 00:28:49.133 Total : 21256.13 83.03 0.00 0.00 6010.30 4833.28 22173.01 00:28:49.133 0 00:28:49.133 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.133 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:49.133 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.133 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.133 | select(.opcode=="crc32c") 00:28:49.133 | "\(.module_name) \(.executed)"' 00:28:49.133 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1463308 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1463308 ']' 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1463308 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1463308 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1463308' 00:28:49.395 killing process with pid 1463308 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1463308 00:28:49.395 Received shutdown signal, test time was about 2.000000 seconds 00:28:49.395 00:28:49.395 Latency(us) 00:28:49.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.395 =================================================================================================================== 00:28:49.395 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1463308 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1464131 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1464131 /var/tmp/bperf.sock 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 1464131 ']' 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:49.395 10:17:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.656 [2024-07-25 10:17:28.545281] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:49.656 [2024-07-25 10:17:28.545337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1464131 ] 00:28:49.656 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:49.656 Zero copy mechanism will not be used. 
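After each timed run the suite verifies which accel module actually executed the crc32c work. The accel_get_stats call and jq filter repeated above reduce to roughly the following; "software" is the expected module here because every run sets scan_dsa=false:

  # Hedged recap of the get_accel_stats check performed after every run in this log.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  read -r acc_module acc_executed < <(
      $SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  # The test passes only if crc32c was executed at all and by the expected (software) module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] \
      && echo "crc32c handled by $acc_module ($acc_executed operations)"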
00:28:49.656 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.656 [2024-07-25 10:17:28.620340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.656 [2024-07-25 10:17:28.673511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.228 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:50.228 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:28:50.228 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:50.228 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:50.228 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:50.488 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.488 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.060 nvme0n1 00:28:51.060 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:51.060 10:17:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:51.060 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:51.060 Zero copy mechanism will not be used. 00:28:51.060 Running I/O for 2 seconds... 
00:28:53.022 00:28:53.022 Latency(us) 00:28:53.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.022 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:53.022 nvme0n1 : 2.01 2349.57 293.70 0.00 0.00 6796.06 5215.57 29054.29 00:28:53.022 =================================================================================================================== 00:28:53.022 Total : 2349.57 293.70 0.00 0.00 6796.06 5215.57 29054.29 00:28:53.022 0 00:28:53.022 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:53.022 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:53.022 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.022 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.022 | select(.opcode=="crc32c") 00:28:53.022 | "\(.module_name) \(.executed)"' 00:28:53.022 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:53.283 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:53.283 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:53.283 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:53.283 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:53.283 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1464131 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1464131 ']' 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1464131 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464131 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464131' 00:28:53.284 killing process with pid 1464131 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1464131 00:28:53.284 Received shutdown signal, test time was about 2.000000 seconds 00:28:53.284 00:28:53.284 Latency(us) 00:28:53.284 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.284 =================================================================================================================== 00:28:53.284 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 1464131 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1461723 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 1461723 ']' 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 1461723 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:53.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1461723 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1461723' 00:28:53.545 killing process with pid 1461723 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 1461723 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 1461723 00:28:53.545 00:28:53.545 real 0m16.356s 00:28:53.545 user 0m32.273s 00:28:53.545 sys 0m3.177s 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:53.545 ************************************ 00:28:53.545 END TEST nvmf_digest_clean 00:28:53.545 ************************************ 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:53.545 ************************************ 00:28:53.545 START TEST nvmf_digest_error 00:28:53.545 ************************************ 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1464977 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1464977 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1464977 ']' 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:53.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:53.545 10:17:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:53.807 [2024-07-25 10:17:32.686894] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:53.807 [2024-07-25 10:17:32.686947] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:53.807 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.807 [2024-07-25 10:17:32.754635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:53.807 [2024-07-25 10:17:32.827580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:53.807 [2024-07-25 10:17:32.827620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:53.807 [2024-07-25 10:17:32.827627] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:53.807 [2024-07-25 10:17:32.827634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:53.807 [2024-07-25 10:17:32.827640] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:53.807 [2024-07-25 10:17:32.827658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.395 [2024-07-25 10:17:33.497586] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:54.395 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.666 null0 00:28:54.666 [2024-07-25 10:17:33.574254] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:54.666 [2024-07-25 10:17:33.598429] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1465141 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1465141 /var/tmp/bperf.sock 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1465141 ']' 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:54.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:54.666 10:17:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:54.666 [2024-07-25 10:17:33.651079] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:54.666 [2024-07-25 10:17:33.651128] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465141 ] 00:28:54.666 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.666 [2024-07-25 10:17:33.724024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.666 [2024-07-25 10:17:33.777882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.609 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.871 nvme0n1 00:28:55.871 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:55.871 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:55.871 10:17:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:55.871 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:55.871 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:55.871 10:17:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.871 Running I/O for 2 seconds... 00:28:55.871 [2024-07-25 10:17:34.959082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:55.871 [2024-07-25 10:17:34.959112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.871 [2024-07-25 10:17:34.959120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.871 [2024-07-25 10:17:34.971431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:55.871 [2024-07-25 10:17:34.971450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.871 [2024-07-25 10:17:34.971457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.871 [2024-07-25 10:17:34.984357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:55.871 [2024-07-25 10:17:34.984377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.871 [2024-07-25 10:17:34.984383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.871 [2024-07-25 10:17:34.996009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:55.871 [2024-07-25 10:17:34.996028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.871 [2024-07-25 10:17:34.996035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.008365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.008384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:24053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.008391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.020276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.020295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.020302] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.032687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.032706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.032712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.046263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.046282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.046288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.057827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.057846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.057852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.070434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.070453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.070459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.081325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.081343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.081349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.093070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.093089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.093095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.105636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.105654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:56.133 [2024-07-25 10:17:35.105660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.120678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.120696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.120703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.132071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.132094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.132101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.143996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.144020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.156077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.156095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.156102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.168543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.168562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.168568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.180549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.180568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.180574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.192399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.192417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:13416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.192423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.204968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.204986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.204993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.217042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.217061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.217067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.230340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.230359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:86 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.230366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.241950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.241968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.241974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.133 [2024-07-25 10:17:35.255697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.133 [2024-07-25 10:17:35.255714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.133 [2024-07-25 10:17:35.255721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.395 [2024-07-25 10:17:35.267204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.395 [2024-07-25 10:17:35.267222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:22694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.395 [2024-07-25 10:17:35.267230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.280071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.280089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.280095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.291222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.291240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:23161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.291247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.304242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.304260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.304267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.315227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.315245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.315252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.328718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.328736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.328743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.340700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.340718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.340728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.353179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.353197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.353207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.365652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 
00:28:56.396 [2024-07-25 10:17:35.365670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.365676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.378177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.378196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.378207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.389982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.389999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.390006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.401324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.401341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.401347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.414917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.414935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.414942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.426950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.426968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:6273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.426974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.439010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.439028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.439035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.450661] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.450681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.450688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.463698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.463716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.463722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.475281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.475299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.475305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.396 [2024-07-25 10:17:35.487576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.396 [2024-07-25 10:17:35.487594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.396 [2024-07-25 10:17:35.487600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.397 [2024-07-25 10:17:35.499762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.397 [2024-07-25 10:17:35.499780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.397 [2024-07-25 10:17:35.499787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.397 [2024-07-25 10:17:35.512679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.397 [2024-07-25 10:17:35.512697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.397 [2024-07-25 10:17:35.512704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.397 [2024-07-25 10:17:35.524680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.397 [2024-07-25 10:17:35.524697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.397 [2024-07-25 10:17:35.524704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.535841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.535859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:21649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.535865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.548267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.548285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.548292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.560498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.560516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.560523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.573944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.573962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:13617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.573969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.585507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.585525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.585532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.598304] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.598322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.598328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.609377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.609395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.609403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.621415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.621433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.621439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.633427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.633444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.633451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.645714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.645732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.645738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.658915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.658933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.658943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.671597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.671616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.671622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.683210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.683227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.683234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.695186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.695207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.695214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.707582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.707599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.707606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.719626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.719644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.719650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.731822] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.731840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.731847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.744342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.744360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.744366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.756286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.756303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.756310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.769134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.769151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:7411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.659 [2024-07-25 10:17:35.769158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.659 [2024-07-25 10:17:35.780158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.659 [2024-07-25 10:17:35.780175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:56.659 [2024-07-25 10:17:35.780182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.920 [2024-07-25 10:17:35.792796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.920 [2024-07-25 10:17:35.792813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.920 [2024-07-25 10:17:35.792820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.920 [2024-07-25 10:17:35.805030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.920 [2024-07-25 10:17:35.805047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.920 [2024-07-25 10:17:35.805053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.920 [2024-07-25 10:17:35.817264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.920 [2024-07-25 10:17:35.817281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.920 [2024-07-25 10:17:35.817288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.920 [2024-07-25 10:17:35.829584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.920 [2024-07-25 10:17:35.829602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.829609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.840988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.841005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.841011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.853356] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.853373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.853379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.866384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.866401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.866413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.878327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.878344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.878350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.890156] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.890174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.890180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.901210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.901228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.901234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.914167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.914185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.914191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.925890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.925907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.925914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.938997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.939014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.939021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.950588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.950605] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.950611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.963404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.963421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.963428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.975143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.975163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.975170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.987296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.987313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.987320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:35.998869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:35.998887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:35.998894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:36.011215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:36.011232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:36.011238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:36.023713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:36.023730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:11683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:36.023737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:36.036932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 
[2024-07-25 10:17:36.036949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:36.036955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.921 [2024-07-25 10:17:36.048914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:56.921 [2024-07-25 10:17:36.048931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.921 [2024-07-25 10:17:36.048938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.061306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.061324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.061330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.073737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.073754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.073761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.085283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.085300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.085306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.097500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.097517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.097523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.110464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.110482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.110488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.122444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.122461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.122467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.183 [2024-07-25 10:17:36.135121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.183 [2024-07-25 10:17:36.135138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:11738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.183 [2024-07-25 10:17:36.135144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.147351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.147368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.147375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.159293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.159311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.159318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.170913] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.170930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.170936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.183955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.183972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.183982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.195506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.195523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.195529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.208852] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.208868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.208874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.221241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.221258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.221264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.233219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.233236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.233243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.246282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.246299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:24783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.246305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.256370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.256387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.256394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.269663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.269680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.269686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.281984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.282001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.282007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.294239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.294256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.294263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.184 [2024-07-25 10:17:36.305921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.184 [2024-07-25 10:17:36.305938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.184 [2024-07-25 10:17:36.305944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.318221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.318239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.318246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.331337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.331354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.331361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.342468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.342485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.342492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.354831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.354848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.354855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.366635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.366652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.366659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.378727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.378745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.378751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.391766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.391784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.391793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.403835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.403853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.403859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.415517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.415533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.415540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.427730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.427747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.427753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.439908] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.439924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.439931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.451899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.451917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.451923] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.464034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.464051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:15179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.464057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.476081] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.476099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.476105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.488884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.488901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.488907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.501781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.501802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.501808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.513166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.513183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.513189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.446 [2024-07-25 10:17:36.525843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.446 [2024-07-25 10:17:36.525860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.446 [2024-07-25 10:17:36.525866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.447 [2024-07-25 10:17:36.537914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.447 [2024-07-25 10:17:36.537931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:57.447 [2024-07-25 10:17:36.537937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.447 [2024-07-25 10:17:36.550130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.447 [2024-07-25 10:17:36.550147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.447 [2024-07-25 10:17:36.550154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.447 [2024-07-25 10:17:36.562183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.447 [2024-07-25 10:17:36.562204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:4319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.447 [2024-07-25 10:17:36.562211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.447 [2024-07-25 10:17:36.575519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.447 [2024-07-25 10:17:36.575538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.447 [2024-07-25 10:17:36.575544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.586205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.586223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.586229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.599171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.599187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.599194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.610475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.610492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.610498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.622916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.622932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:10009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.622939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.636457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.636473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.636480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.648378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.648396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.648402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.660730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.660747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.660754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.672377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.672393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.672400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.683837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.683854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.683860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.696143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.696159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.696166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.710393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.710411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.710420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.720539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.720556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.720562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.732561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.732578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.732584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.744525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.744542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.744548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.757344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.757361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.757368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.769299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.769316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.769322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.781350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.709 [2024-07-25 10:17:36.781368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.709 [2024-07-25 10:17:36.781374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.709 [2024-07-25 10:17:36.793350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 
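Note on the output above: the repeating triplets (a data digest error reported by nvme_tcp.c, the offending READ command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) are the expected signature of this run; each failed data digest check (the digest is a CRC-32C over the received PDU data) surfaces as a transient transport error completion that bdev_nvme counts per status code. A minimal sketch of how that counter can be read back, assuming the bperf RPC socket and bdev name used in this job (/var/tmp/bperf.sock, nvme0n1) and mirroring the get_transient_errcount step traced further down (host/digest.sh@27-28 and @71); this is not the test script itself:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# bdev_get_iostat includes per-bdev NVMe error counters because the controller
# was set up with bdev_nvme_set_options --nvme-error-stat.
count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# The test only passes when at least one transient transport error was recorded.
(( count > 0 )) && echo "observed $count transient transport errors on nvme0n1"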
00:28:57.710 [2024-07-25 10:17:36.793367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:11434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.710 [2024-07-25 10:17:36.793374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.710 [2024-07-25 10:17:36.807399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.710 [2024-07-25 10:17:36.807417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.710 [2024-07-25 10:17:36.807423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.710 [2024-07-25 10:17:36.818004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.710 [2024-07-25 10:17:36.818025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.710 [2024-07-25 10:17:36.818031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.710 [2024-07-25 10:17:36.831347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.710 [2024-07-25 10:17:36.831365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.710 [2024-07-25 10:17:36.831372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.842497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.842514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.842521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.854581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.854598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.854604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.867467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.867485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.867491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.879930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.879947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.879954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.892597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.892614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.892620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.903639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.903657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.903664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.916214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.916231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.971 [2024-07-25 10:17:36.916241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.971 [2024-07-25 10:17:36.928946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.971 [2024-07-25 10:17:36.928963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.972 [2024-07-25 10:17:36.928970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.972 [2024-07-25 10:17:36.940730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x104ecd0) 00:28:57.972 [2024-07-25 10:17:36.940747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:57.972 [2024-07-25 10:17:36.940754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:57.972 00:28:57.972 Latency(us) 00:28:57.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.972 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:57.972 nvme0n1 : 2.00 20817.04 81.32 0.00 0.00 6141.94 3522.56 16493.23 00:28:57.972 =================================================================================================================== 00:28:57.972 Total : 20817.04 81.32 0.00 0.00 6141.94 3522.56 16493.23 00:28:57.972 0 00:28:57.972 10:17:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:57.972 10:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:57.972 | .driver_specific 00:28:57.972 | .nvme_error 00:28:57.972 | .status_code 00:28:57.972 | .command_transient_transport_error' 00:28:57.972 10:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:57.972 10:17:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 163 > 0 )) 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1465141 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1465141 ']' 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1465141 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465141 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465141' 00:28:58.233 killing process with pid 1465141 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1465141 00:28:58.233 Received shutdown signal, test time was about 2.000000 seconds 00:28:58.233 00:28:58.233 Latency(us) 00:28:58.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.233 =================================================================================================================== 00:28:58.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1465141 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1465910 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1465910 /var/tmp/bperf.sock 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@831 -- # '[' -z 1465910 ']' 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:58.233 10:17:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:58.233 [2024-07-25 10:17:37.349243] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:58.233 [2024-07-25 10:17:37.349300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465910 ] 00:28:58.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.233 Zero copy mechanism will not be used. 00:28:58.494 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.494 [2024-07-25 10:17:37.424027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.494 [2024-07-25 10:17:37.477429] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.064 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:59.064 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:59.064 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.064 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.325 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 
4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.586 nvme0n1 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:59.586 10:17:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:59.846 Zero copy mechanism will not be used. 00:28:59.846 Running I/O for 2 seconds... 00:28:59.846 [2024-07-25 10:17:38.808496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.846 [2024-07-25 10:17:38.808526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.846 [2024-07-25 10:17:38.808535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.846 [2024-07-25 10:17:38.825500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.846 [2024-07-25 10:17:38.825521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.846 [2024-07-25 10:17:38.825527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.846 [2024-07-25 10:17:38.840166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.846 [2024-07-25 10:17:38.840187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.846 [2024-07-25 10:17:38.840193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.846 [2024-07-25 10:17:38.857775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.857795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.857802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.874182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.874206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.874213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.889264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.889284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.889291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.904438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.904460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.904467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.921282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.921300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.921307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.938895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.938914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.938920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.954638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.954656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.954663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.847 [2024-07-25 10:17:38.971681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:28:59.847 [2024-07-25 10:17:38.971701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.847 [2024-07-25 10:17:38.971707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:38.987223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:38.987243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.108 [2024-07-25 10:17:38.987249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.005035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.005053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.005060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.022248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.022266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.022273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.038100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.038118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.038124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.054271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.054290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.054296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.070652] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.070670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.070677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.088327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.088346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.088352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.102800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.102818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.102825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.119803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.119822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.119828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.136327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.136346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.136352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.108 [2024-07-25 10:17:39.153998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.108 [2024-07-25 10:17:39.154017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.108 [2024-07-25 10:17:39.154023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.109 [2024-07-25 10:17:39.170503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.109 [2024-07-25 10:17:39.170521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.109 [2024-07-25 10:17:39.170528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.109 [2024-07-25 10:17:39.187189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.109 [2024-07-25 10:17:39.187212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.109 [2024-07-25 10:17:39.187222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.109 [2024-07-25 10:17:39.203411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.109 [2024-07-25 10:17:39.203429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.109 [2024-07-25 10:17:39.203435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.109 [2024-07-25 10:17:39.220910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.109 [2024-07-25 10:17:39.220928] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.109 [2024-07-25 10:17:39.220935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.109 [2024-07-25 10:17:39.237179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.109 [2024-07-25 10:17:39.237198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.109 [2024-07-25 10:17:39.237211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.253290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.253309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.253317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.269476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.269494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.269500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.288632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.288650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.288657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.304827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.304846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.304852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.320657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.320676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.320682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.335182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 
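Note on the output above: this second burst of digest errors comes from the 128 KiB (131072-byte) randread job traced earlier (host/digest.sh@57-69): bdevperf listens on /var/tmp/bperf.sock, CRC-32C corruption is injected through the accel_error module, and perform_tests drives I/O for two seconds. A minimal sketch of that sequence, using only the paths, address, and NQN printed in this log; the accel_error_inject_error calls go through rpc_cmd in the trace rather than the bperf socket, so which application the plain rpc.py invocations below reach is an assumption:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
# Count NVMe errors per status code and retry indefinitely, so digest failures
# end up in the transient-transport-error statistic instead of failing the job.
$bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Clear any previously configured injection before attaching.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
# Attach the TCP controller with data digest enabled (--ddgst).
$bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
  -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt the software CRC-32C computation (arguments copied from the trace)
# and run the two-second bdevperf job.
"$spdk/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests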
00:29:00.371 [2024-07-25 10:17:39.335205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.335211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.353045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.353063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.353069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.369679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.369697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.369703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.385532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.385551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.385558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.403693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.403712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.403718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.420162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.420181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.420187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.435808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.435826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.435833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.453791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.453809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.453816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.468857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.468876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.468885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.484206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.484224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.484230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.371 [2024-07-25 10:17:39.503127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.371 [2024-07-25 10:17:39.503146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.371 [2024-07-25 10:17:39.503152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.515432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.515450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.515457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.531337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.531355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.531361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.549059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.549077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.549084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.564866] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.564884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.564891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.581612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.581630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.581637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.597059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.597077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.597084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.613810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.613832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.613838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.631898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.631917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.631923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.647474] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.647493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.647499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.663554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.663573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.663579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:29:00.633 [2024-07-25 10:17:39.680666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.680685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.680691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.696737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.633 [2024-07-25 10:17:39.696755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.633 [2024-07-25 10:17:39.696762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.633 [2024-07-25 10:17:39.712853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.634 [2024-07-25 10:17:39.712871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.634 [2024-07-25 10:17:39.712877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.634 [2024-07-25 10:17:39.728735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.634 [2024-07-25 10:17:39.728753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.634 [2024-07-25 10:17:39.728760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.634 [2024-07-25 10:17:39.746072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.634 [2024-07-25 10:17:39.746090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.634 [2024-07-25 10:17:39.746097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.634 [2024-07-25 10:17:39.762699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.634 [2024-07-25 10:17:39.762717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.634 [2024-07-25 10:17:39.762723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.895 [2024-07-25 10:17:39.780858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.895 [2024-07-25 10:17:39.780877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.780884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.796418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.796436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.796443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.813885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.813904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.813910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.830999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.831017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.831023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.848324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.848341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.848347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.865416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.865435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.865441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.881676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.881694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.881701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.898348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.898366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.898376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.916565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.916583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.916590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.932931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.932949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.932955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.948876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.948894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.948900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.966458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.966475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.966481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:39.983075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:39.983094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:39.983101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:40.001234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:40.001253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.896 [2024-07-25 10:17:40.001259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.896 [2024-07-25 10:17:40.017345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:00.896 [2024-07-25 10:17:40.017365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.896 [2024-07-25 10:17:40.017372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.032282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.032302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.032308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.046789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.046808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.046814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.063724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.063743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.063749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.080319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.080344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.098389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.098408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.098415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.114738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.114757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.114763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.131402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.131421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.131427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.147750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.147769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.147775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.165080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.165097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.165104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.158 [2024-07-25 10:17:40.181730] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.158 [2024-07-25 10:17:40.181748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.158 [2024-07-25 10:17:40.181759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.198621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.198640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.198646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.213850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.213868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.213875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.229711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.229730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.229736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.245253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.245272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.245278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.262269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.262288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.262295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.159 [2024-07-25 10:17:40.277707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.159 [2024-07-25 10:17:40.277725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.159 [2024-07-25 10:17:40.277731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.294528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.294547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.294554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.311843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.311862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.311869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.328785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.328808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.328815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.343602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.343621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.343628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.362005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 
00:29:01.420 [2024-07-25 10:17:40.362024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.362030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.378003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.378022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.378028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.393824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.393843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.393850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.409631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.409650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.409656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.426206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.426225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.426232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.442070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.442090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.420 [2024-07-25 10:17:40.442096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.420 [2024-07-25 10:17:40.457875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.420 [2024-07-25 10:17:40.457895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.457901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.421 [2024-07-25 10:17:40.474528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.421 [2024-07-25 10:17:40.474547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.421 [2024-07-25 10:17:40.490358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.421 [2024-07-25 10:17:40.490378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.490384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.421 [2024-07-25 10:17:40.505869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.421 [2024-07-25 10:17:40.505887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.505894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.421 [2024-07-25 10:17:40.522346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.421 [2024-07-25 10:17:40.522365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.522372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.421 [2024-07-25 10:17:40.538841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.421 [2024-07-25 10:17:40.538860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.421 [2024-07-25 10:17:40.538866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.555118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.555138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.555144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.571556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.571574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.571580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.587048] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.587067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.587074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.602632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.602651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.602662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.619685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.619704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.619710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.636424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.636443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.636449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.653970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.653988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.653994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.671270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.671290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.671296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.687791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.687809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.687816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:01.682 [2024-07-25 10:17:40.704007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.704026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.704033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.720178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.682 [2024-07-25 10:17:40.720197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.682 [2024-07-25 10:17:40.720208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.682 [2024-07-25 10:17:40.736632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.683 [2024-07-25 10:17:40.736650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.683 [2024-07-25 10:17:40.736657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:01.683 [2024-07-25 10:17:40.752528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.683 [2024-07-25 10:17:40.752550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.683 [2024-07-25 10:17:40.752556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:01.683 [2024-07-25 10:17:40.766475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.683 [2024-07-25 10:17:40.766494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.683 [2024-07-25 10:17:40.766500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:01.683 [2024-07-25 10:17:40.783233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9df9f0) 00:29:01.683 [2024-07-25 10:17:40.783252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:01.683 [2024-07-25 10:17:40.783258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:01.683 00:29:01.683 Latency(us) 00:29:01.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.683 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:01.683 nvme0n1 : 2.01 1878.24 234.78 0.00 0.00 8515.06 6171.31 19005.44 00:29:01.683 =================================================================================================================== 00:29:01.683 Total : 1878.24 234.78 0.00 
0.00 8515.06 6171.31 19005.44 00:29:01.683 0 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:01.943 | .driver_specific 00:29:01.943 | .nvme_error 00:29:01.943 | .status_code 00:29:01.943 | .command_transient_transport_error' 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1465910 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1465910 ']' 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1465910 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:01.943 10:17:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1465910 00:29:01.943 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:01.943 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:01.943 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1465910' 00:29:01.943 killing process with pid 1465910 00:29:01.944 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1465910 00:29:01.944 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.944 00:29:01.944 Latency(us) 00:29:01.944 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.944 =================================================================================================================== 00:29:01.944 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.944 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1465910 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1466687 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1466687 /var/tmp/bperf.sock 
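(For readability: the transient-error check traced just above reduces to one bdev_get_iostat RPC against the bperf socket plus a jq filter, then a non-zero test. A minimal sketch of what host/digest.sh is doing here is shown below; the paths, socket and bdev name are the ones used in this run, and the hard-coded count of 121 seen in the trace is replaced by a variable, so read it as a sketch rather than the literal script.)

errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
               bdev_get_iostat -b nvme0n1 \
           | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
# the digest_error case only passes if the injected CRC32C corruption really produced
# completions recorded as COMMAND TRANSIENT TRANSPORT ERROR (here: 121 of them)
(( errcount > 0 ))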
00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1466687 ']' 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:02.205 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:02.205 [2024-07-25 10:17:41.198174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:02.205 [2024-07-25 10:17:41.198231] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466687 ] 00:29:02.205 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.205 [2024-07-25 10:17:41.271812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.205 [2024-07-25 10:17:41.323560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:03.149 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:03.149 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:03.149 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:03.149 10:17:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:03.149 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 
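(The xtrace lines above set up the randwrite error leg. Stripped of the test-name prefixes, the setup amounts to the three commands below; every path, flag and address is copied from this trace. The harness actually backgrounds bdevperf and waits on its socket via waitforlisten, so this is a readable sketch of the sequence, not the literal script.)

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &   # -z: stay idle until a perform_tests RPC arrives
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1       # keep per-NVMe error counters for the later transient-error check
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
    bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0                              # --ddgst enables the NVMe/TCP data digest that the test corrupts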
00:29:03.411 nvme0n1 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:03.411 10:17:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.411 Running I/O for 2 seconds... 00:29:03.411 [2024-07-25 10:17:42.542250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e88f8 00:29:03.411 [2024-07-25 10:17:42.543018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.411 [2024-07-25 10:17:42.543047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.555156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4298 00:29:03.673 [2024-07-25 10:17:42.555923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.555942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.568315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:03.673 [2024-07-25 10:17:42.569293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.569310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.581166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4f40 00:29:03.673 [2024-07-25 10:17:42.582148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.582165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.594057] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f1868 00:29:03.673 [2024-07-25 10:17:42.595020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.595038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.606895] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6fa8 00:29:03.673 [2024-07-25 10:17:42.607860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.607877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.618631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f46d0 00:29:03.673 [2024-07-25 10:17:42.619593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.619609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.632465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:03.673 [2024-07-25 10:17:42.633433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.633450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.645166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f46d0 00:29:03.673 [2024-07-25 10:17:42.646125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.646141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.657908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e8088 00:29:03.673 [2024-07-25 10:17:42.658853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.658870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.670704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e9168 00:29:03.673 [2024-07-25 10:17:42.671670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.671686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.682475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f20d8 00:29:03.673 [2024-07-25 10:17:42.683410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.683426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.696317] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4f40 00:29:03.673 [2024-07-25 10:17:42.697243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.697258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.709107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:03.673 [2024-07-25 10:17:42.710036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.710052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.723434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7100 00:29:03.673 [2024-07-25 10:17:42.725019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.725035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.734648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ecc78 00:29:03.673 [2024-07-25 10:17:42.735596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.735612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.747389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ed4e8 00:29:03.673 [2024-07-25 10:17:42.748335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.748350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.760132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eee38 00:29:03.673 [2024-07-25 10:17:42.761083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.761099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.772848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f3a28 00:29:03.673 [2024-07-25 10:17:42.773794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.773810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 
10:17:42.784593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fc560 00:29:03.673 [2024-07-25 10:17:42.785524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.785539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:03.673 [2024-07-25 10:17:42.798434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fd640 00:29:03.673 [2024-07-25 10:17:42.799415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.673 [2024-07-25 10:17:42.799431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.811174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fef90 00:29:03.935 [2024-07-25 10:17:42.812105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.812120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.825478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fc128 00:29:03.935 [2024-07-25 10:17:42.827064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.827080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.836773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e1b48 00:29:03.935 [2024-07-25 10:17:42.837721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.837740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.849520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e0a68 00:29:03.935 [2024-07-25 10:17:42.850463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.850478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.862247] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eb328 00:29:03.935 [2024-07-25 10:17:42.863193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.863211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:29:03.935 [2024-07-25 10:17:42.874975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eff18 00:29:03.935 [2024-07-25 10:17:42.875921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.875936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.935 [2024-07-25 10:17:42.887758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e9168 00:29:03.935 [2024-07-25 10:17:42.888703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.935 [2024-07-25 10:17:42.888719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.900557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e8088 00:29:03.936 [2024-07-25 10:17:42.901503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.901518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.913452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f46d0 00:29:03.936 [2024-07-25 10:17:42.914378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.914394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.926272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f57b0 00:29:03.936 [2024-07-25 10:17:42.927196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.927214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.939027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6890 00:29:03.936 [2024-07-25 10:17:42.939955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.939971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.951819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7970 00:29:03.936 [2024-07-25 10:17:42.952749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.952764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.964574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f8a50 00:29:03.936 [2024-07-25 10:17:42.965501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.965516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.977333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f9b30 00:29:03.936 [2024-07-25 10:17:42.978255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.978271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:42.990081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fac10 00:29:03.936 [2024-07-25 10:17:42.991011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:42.991026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.002828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e49b0 00:29:03.936 [2024-07-25 10:17:43.003773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.003789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.014645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4298 00:29:03.936 [2024-07-25 10:17:43.015579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.015595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.028616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e5220 00:29:03.936 [2024-07-25 10:17:43.029548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.029564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.041425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ddc00 00:29:03.936 [2024-07-25 10:17:43.042349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.042365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.054123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eee38 00:29:03.936 [2024-07-25 10:17:43.055055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.055071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.936 [2024-07-25 10:17:43.066859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e88f8 00:29:03.936 [2024-07-25 10:17:43.067808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.936 [2024-07-25 10:17:43.067823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.079634] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e7818 00:29:04.198 [2024-07-25 10:17:43.080580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.080595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.093868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6738 00:29:04.198 [2024-07-25 10:17:43.095447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.095462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.105048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190edd58 00:29:04.198 [2024-07-25 10:17:43.105987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.106002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.117796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ee5c8 00:29:04.198 [2024-07-25 10:17:43.118733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.118749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.132043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e5a90 00:29:04.198 [2024-07-25 10:17:43.133612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.133628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.143282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f0ff8 00:29:04.198 [2024-07-25 10:17:43.144204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.144219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.156031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4298 00:29:04.198 [2024-07-25 10:17:43.156951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.156967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.168758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6890 00:29:04.198 [2024-07-25 10:17:43.169673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.169692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.181465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e27f0 00:29:04.198 [2024-07-25 10:17:43.182359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.182375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.194179] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e1710 00:29:04.198 [2024-07-25 10:17:43.195078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.195093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.206936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e0630 00:29:04.198 [2024-07-25 10:17:43.207831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.207847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.219684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eaef0 00:29:04.198 [2024-07-25 10:17:43.220582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.220598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.231370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f2948 00:29:04.198 [2024-07-25 10:17:43.232261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.232276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.245167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ed4e8 00:29:04.198 [2024-07-25 10:17:43.246054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.246069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.257964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190df550 00:29:04.198 [2024-07-25 10:17:43.258870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:5600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.258885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.269732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f92c0 00:29:04.198 [2024-07-25 10:17:43.270624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.270639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.283469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fa3a0 00:29:04.198 [2024-07-25 10:17:43.284364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.284380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.296265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fb480 00:29:04.198 [2024-07-25 10:17:43.297143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.297158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.309068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fdeb0 00:29:04.198 [2024-07-25 10:17:43.309955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.309972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.198 [2024-07-25 10:17:43.321824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190dece0 00:29:04.198 [2024-07-25 10:17:43.322698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:23867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.198 [2024-07-25 10:17:43.322714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.336119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ec408 00:29:04.466 [2024-07-25 10:17:43.337653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.337668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.347325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:04.466 [2024-07-25 10:17:43.348215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.348231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.360056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eaef0 00:29:04.466 [2024-07-25 10:17:43.360934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.360950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.374288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190efae0 00:29:04.466 [2024-07-25 10:17:43.375821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.375837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.384526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e49b0 00:29:04.466 [2024-07-25 10:17:43.385407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.385423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.398186] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190df550 00:29:04.466 [2024-07-25 10:17:43.399066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.399081] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.410875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7970 00:29:04.466 [2024-07-25 10:17:43.411750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.411765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.422571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fdeb0 00:29:04.466 [2024-07-25 10:17:43.423449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.423464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.436331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ff3c8 00:29:04.466 [2024-07-25 10:17:43.437190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:7713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.437208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.449118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fd208 00:29:04.466 [2024-07-25 10:17:43.449986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.450001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.461918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e38d0 00:29:04.466 [2024-07-25 10:17:43.462803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.462818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.474661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ecc78 00:29:04.466 [2024-07-25 10:17:43.475546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.475562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.487367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fbcf0 00:29:04.466 [2024-07-25 10:17:43.488247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.488262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.500109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e49b0 00:29:04.466 [2024-07-25 10:17:43.500993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.501012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.512802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190df550 00:29:04.466 [2024-07-25 10:17:43.513690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.513706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.525531] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7970 00:29:04.466 [2024-07-25 10:17:43.526412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.526428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.538281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4f40 00:29:04.466 [2024-07-25 10:17:43.539161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.539177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.551022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:04.466 [2024-07-25 10:17:43.551890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.551906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.563782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:04.466 [2024-07-25 10:17:43.564651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.564667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.576527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:04.466 [2024-07-25 10:17:43.577392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:15909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 
10:17:43.577407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.466 [2024-07-25 10:17:43.589292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e1710 00:29:04.466 [2024-07-25 10:17:43.590155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.466 [2024-07-25 10:17:43.590170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.726 [2024-07-25 10:17:43.602029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4f40 00:29:04.726 [2024-07-25 10:17:43.602886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.726 [2024-07-25 10:17:43.602901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.726 [2024-07-25 10:17:43.616322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7970 00:29:04.726 [2024-07-25 10:17:43.617837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.726 [2024-07-25 10:17:43.617853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.726 [2024-07-25 10:17:43.627565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f81e0 00:29:04.726 [2024-07-25 10:17:43.628439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.726 [2024-07-25 10:17:43.628455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:04.726 [2024-07-25 10:17:43.640293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f9b30 00:29:04.726 [2024-07-25 10:17:43.641158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.726 [2024-07-25 10:17:43.641173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:04.726 [2024-07-25 10:17:43.652979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7100 00:29:04.726 [2024-07-25 10:17:43.653854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.726 [2024-07-25 10:17:43.653869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.664724] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6738 00:29:04.727 [2024-07-25 10:17:43.665584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:04.727 [2024-07-25 10:17:43.665599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.678452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e7818 00:29:04.727 [2024-07-25 10:17:43.679306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:18473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.679321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.691143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e88f8 00:29:04.727 [2024-07-25 10:17:43.691998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.692013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.702900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:04.727 [2024-07-25 10:17:43.703754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.703770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.716584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7100 00:29:04.727 [2024-07-25 10:17:43.717430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.717446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.728252] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e88f8 00:29:04.727 [2024-07-25 10:17:43.729090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.729106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.741957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f0788 00:29:04.727 [2024-07-25 10:17:43.742790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.742806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.754661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eaab8 00:29:04.727 [2024-07-25 10:17:43.755497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5281 len:1 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.755513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.769132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ebb98 00:29:04.727 [2024-07-25 10:17:43.770631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:10050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.770647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.780318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:04.727 [2024-07-25 10:17:43.781170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.781185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.792075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e7818 00:29:04.727 [2024-07-25 10:17:43.792928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:22302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.792943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.805879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e0ea0 00:29:04.727 [2024-07-25 10:17:43.806714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.806729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.818588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e4140 00:29:04.727 [2024-07-25 10:17:43.819423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.819439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.830386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f0788 00:29:04.727 [2024-07-25 10:17:43.831212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.831228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.844172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:04.727 [2024-07-25 10:17:43.845000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3928 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.845015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:04.727 [2024-07-25 10:17:43.855928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e4140 00:29:04.727 [2024-07-25 10:17:43.856751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.727 [2024-07-25 10:17:43.856767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:04.988 [2024-07-25 10:17:43.871286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ea248 00:29:04.988 [2024-07-25 10:17:43.872758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.988 [2024-07-25 10:17:43.872774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:04.988 [2024-07-25 10:17:43.882490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:04.988 [2024-07-25 10:17:43.883323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.988 [2024-07-25 10:17:43.883339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:04.988 [2024-07-25 10:17:43.895180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e4140 00:29:04.988 [2024-07-25 10:17:43.896017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.988 [2024-07-25 10:17:43.896033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:04.988 [2024-07-25 10:17:43.907897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:04.988 [2024-07-25 10:17:43.908725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.988 [2024-07-25 10:17:43.908741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:04.988 [2024-07-25 10:17:43.919656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ed4e8 00:29:04.988 [2024-07-25 10:17:43.920469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.920485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.933426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e8088 00:29:04.989 [2024-07-25 10:17:43.934245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23308 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.934261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.946140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ed4e8 00:29:04.989 [2024-07-25 10:17:43.946957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.946976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.960352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6890 00:29:04.989 [2024-07-25 10:17:43.961792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.961808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.972998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eff18 00:29:04.989 [2024-07-25 10:17:43.974436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.974452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.984210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eff18 00:29:04.989 [2024-07-25 10:17:43.985000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.985016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:43.997064] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:04.989 [2024-07-25 10:17:43.997841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:43.997856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.009864] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:04.989 [2024-07-25 10:17:44.010658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.010674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.021657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6738 00:29:04.989 [2024-07-25 10:17:44.022443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:12987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.022459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.037135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f0788 00:29:04.989 [2024-07-25 10:17:44.038572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:24447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.038589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.048306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f0788 00:29:04.989 [2024-07-25 10:17:44.049095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.049111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.061036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e1710 00:29:04.989 [2024-07-25 10:17:44.061826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.061842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.076330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:04.989 [2024-07-25 10:17:44.077775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.077790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.089058] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:04.989 [2024-07-25 10:17:44.090508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.090523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.100877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6fa8 00:29:04.989 [2024-07-25 10:17:44.102302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.102317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:04.989 [2024-07-25 10:17:44.112150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6738 00:29:04.989 [2024-07-25 10:17:44.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.989 [2024-07-25 10:17:44.112950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.124820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ea248 00:29:05.251 [2024-07-25 10:17:44.125609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.125625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.139044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f2948 00:29:05.251 [2024-07-25 10:17:44.140464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.140479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.151701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e3060 00:29:05.251 [2024-07-25 10:17:44.153105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.153120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.164358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f2948 00:29:05.251 [2024-07-25 10:17:44.165751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.165767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.178086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f31b8 00:29:05.251 [2024-07-25 10:17:44.179499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.179515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.190857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:05.251 [2024-07-25 10:17:44.192254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.192270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.203571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:05.251 [2024-07-25 10:17:44.204949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.204965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.217852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:05.251 [2024-07-25 10:17:44.219879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.219895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.228140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e0ea0 00:29:05.251 [2024-07-25 10:17:44.229515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.229531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.241991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:05.251 [2024-07-25 10:17:44.243381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.243397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.254709] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ea248 00:29:05.251 [2024-07-25 10:17:44.256097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.256113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.267400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e0ea0 00:29:05.251 [2024-07-25 10:17:44.268777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.268793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.280153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e6738 00:29:05.251 [2024-07-25 10:17:44.281527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.281545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.291907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:05.251 [2024-07-25 10:17:44.293267] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.293283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.305675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e4140 00:29:05.251 [2024-07-25 10:17:44.307042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.307058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.320006] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ed4e8 00:29:05.251 [2024-07-25 10:17:44.322019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.322035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.330251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f6020 00:29:05.251 [2024-07-25 10:17:44.331601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.331617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.345510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:05.251 [2024-07-25 10:17:44.347518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.251 [2024-07-25 10:17:44.347534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:05.251 [2024-07-25 10:17:44.356700] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e99d8 00:29:05.252 [2024-07-25 10:17:44.358054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.252 [2024-07-25 10:17:44.358070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:05.252 [2024-07-25 10:17:44.368461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e7818 00:29:05.252 [2024-07-25 10:17:44.369801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.252 [2024-07-25 10:17:44.369816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:05.252 [2024-07-25 10:17:44.382209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4298 00:29:05.252 [2024-07-25 
10:17:44.383561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.252 [2024-07-25 10:17:44.383577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:05.512 [2024-07-25 10:17:44.396471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190e5220 00:29:05.512 [2024-07-25 10:17:44.398464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.512 [2024-07-25 10:17:44.398480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:05.512 [2024-07-25 10:17:44.407707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f20d8 00:29:05.512 [2024-07-25 10:17:44.409052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.513 [2024-07-25 10:17:44.409068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:05.513 [2024-07-25 10:17:44.420465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f57b0 00:29:05.513 [2024-07-25 10:17:44.421810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.513 [2024-07-25 10:17:44.421825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:05.513 [2024-07-25 10:17:44.433142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190ee5c8 00:29:05.513 [2024-07-25 10:17:44.434485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.513 [2024-07-25 10:17:44.434501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:05.513 [2024-07-25 10:17:44.447426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190edd58 00:29:05.513 [2024-07-25 10:17:44.449395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.513 [2024-07-25 10:17:44.449411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:05.513 [2024-07-25 10:17:44.457658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190de470 00:29:05.513 [2024-07-25 10:17:44.458975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:05.513 [2024-07-25 10:17:44.458990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:05.513 [2024-07-25 10:17:44.470401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f7970 
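The records above are the tail of the first bdevperf pass (randwrite, 4096-byte I/O, queue depth 128) against the data-digest-enabled controller; each injected digest failure surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion. A few records further down, host/digest.sh totals those failures by pulling bdevperf's iostat over the bperf RPC socket and filtering out the command_transient_transport_error counter. A minimal stand-alone sketch of that check, using the socket path, RPC call, and jq filter exactly as they appear in the trace below (the variable names and the final echo are mine):

    #!/usr/bin/env bash
    # Count NVMe completions that came back as COMMAND TRANSIENT TRANSPORT ERROR,
    # which is how the corrupted data digests show up in the host's error stats.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this run
    BPERF_SOCK=/var/tmp/bperf.sock

    errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # This run reported 156; the test only asserts that the count is non-zero.
    (( errcount > 0 )) && echo "transient transport errors seen: $errcount"

The non-zero requirement is visible in the trace below as the check "(( 156 > 0 ))".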
00:29:05.513 [2024-07-25 10:17:44.471700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:21567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.513 [2024-07-25 10:17:44.471715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:05.513 [2024-07-25 10:17:44.484182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f4f40
00:29:05.513 [2024-07-25 10:17:44.485485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.513 [2024-07-25 10:17:44.485500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:05.513 [2024-07-25 10:17:44.498534] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190f46d0
00:29:05.513 [2024-07-25 10:17:44.500490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.513 [2024-07-25 10:17:44.500505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:05.513 [2024-07-25 10:17:44.508788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190eff18
00:29:05.513 [2024-07-25 10:17:44.510087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:5160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.513 [2024-07-25 10:17:44.510102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:05.513 [2024-07-25 10:17:44.521501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7560c0) with pdu=0x2000190fc560
00:29:05.513 [2024-07-25 10:17:44.522791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:18224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:05.513 [2024-07-25 10:17:44.522806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:05.513
00:29:05.513 Latency(us)
00:29:05.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:05.513 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:05.513 nvme0n1 : 2.00 19930.48 77.85 0.00 0.00 6413.36 3795.63 16384.00
00:29:05.513 ===================================================================================================================
00:29:05.513 Total : 19930.48 77.85 0.00 0.00 6413.36 3795.63 16384.00
00:29:05.513 0
00:29:05.513 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:05.513 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:05.513 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:05.513 | .driver_specific
00:29:05.513 | .nvme_error
00:29:05.513 | .status_code
00:29:05.513 | .command_transient_transport_error'
00:29:05.513 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 156 > 0 )) 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1466687 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1466687 ']' 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1466687 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1466687 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1466687' 00:29:05.775 killing process with pid 1466687 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1466687 00:29:05.775 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.775 00:29:05.775 Latency(us) 00:29:05.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.775 =================================================================================================================== 00:29:05.775 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1466687 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1467375 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1467375 /var/tmp/bperf.sock 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 1467375 ']' 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:05.775 10:17:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:05.775 10:17:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:06.036 [2024-07-25 10:17:44.937162] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:06.036 [2024-07-25 10:17:44.937224] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1467375 ] 00:29:06.036 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:06.036 Zero copy mechanism will not be used. 00:29:06.036 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.036 [2024-07-25 10:17:45.012142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.036 [2024-07-25 10:17:45.065403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.648 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:06.648 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:29:06.648 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.648 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.909 10:17:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.171 nvme0n1 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.171 10:17:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.171 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.171 Zero copy mechanism will not be used. 00:29:07.171 Running I/O for 2 seconds... 00:29:07.171 [2024-07-25 10:17:46.290737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.171 [2024-07-25 10:17:46.291086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.171 [2024-07-25 10:17:46.291111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.306096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.306367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.306388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.319628] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.320029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.320047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.332165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.332555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.332573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.345395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.345670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.345687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.358838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.359167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.359184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.372041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.372414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.372436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.385082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.385350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.433 [2024-07-25 10:17:46.385367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.433 [2024-07-25 10:17:46.398272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.433 [2024-07-25 10:17:46.398635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.398653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.411393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.411785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.411802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.424683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.424946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.424962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.438273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.438674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.438690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.451353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.451658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.451674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.466380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.466652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.466669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.480222] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.480581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.480598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.494588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.494944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.494961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.509086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.509456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.509473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.523342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.523733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.523750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.537639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.538011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.538029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.434 [2024-07-25 10:17:46.552600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.434 [2024-07-25 10:17:46.552991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.434 [2024-07-25 10:17:46.553008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.695 [2024-07-25 10:17:46.566839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.695 [2024-07-25 10:17:46.567234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.695 [2024-07-25 10:17:46.567250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.695 [2024-07-25 10:17:46.580560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.580947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.580964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.594192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.594367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.594383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.608294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.608545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.608561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.622918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.623255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.623272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.637567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.637963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.637979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.652170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.652342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 
[2024-07-25 10:17:46.652357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.666772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.667141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.667158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.680842] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.681176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.681193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.695103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.695482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.695499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.709138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.709496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.709513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.723010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.723358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.723375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.737462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.737838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.737858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.751661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.752062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.752079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.765562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.765921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.765938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.779900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.780304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.780321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.794333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.794740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.794757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.808220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.808665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.808681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.696 [2024-07-25 10:17:46.822184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.696 [2024-07-25 10:17:46.822583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.696 [2024-07-25 10:17:46.822600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.835650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.836015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.836033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.849187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.849608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.849625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.862958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.863212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.863228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.876670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.877020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.877037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.890368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.890710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.890726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.903380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.903721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.903738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.915229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.915560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.915577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.927701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.928080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.928097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.940319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.940656] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.940673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.953932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.954354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.954371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.967122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.967512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.958 [2024-07-25 10:17:46.967532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.958 [2024-07-25 10:17:46.981424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.958 [2024-07-25 10:17:46.981691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:46.981708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:46.993637] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:46.994101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:46.994119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.007345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.007558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.007574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.020896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.021144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.021161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.034240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 
[2024-07-25 10:17:47.034492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.034509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.046738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.046988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.047005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.059033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.059456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.059473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.071835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.072232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.072249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.959 [2024-07-25 10:17:47.084359] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:07.959 [2024-07-25 10:17:47.084720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.959 [2024-07-25 10:17:47.084737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.097504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.097886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.097902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.110242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.110679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.110695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.122982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with 
pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.123358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.123375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.135316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.135580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.135596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.149056] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.149407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.149424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.162559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.162827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.162844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.175759] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.176141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.176158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.188434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.188766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.188783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.201647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.202006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.202022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.215939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.216253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.216271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.228332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.228628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.228645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.241044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.241406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.241423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.254156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.254591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.254607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.268127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.268498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.268514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.281078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.281439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.281456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.294162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.294520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.308214] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.308532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.221 [2024-07-25 10:17:47.308551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.221 [2024-07-25 10:17:47.321485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.221 [2024-07-25 10:17:47.321848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.222 [2024-07-25 10:17:47.321865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.222 [2024-07-25 10:17:47.334962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.222 [2024-07-25 10:17:47.335328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.222 [2024-07-25 10:17:47.335345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.222 [2024-07-25 10:17:47.348814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.222 [2024-07-25 10:17:47.349128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.222 [2024-07-25 10:17:47.349144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.361968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.362227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.362243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.374505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.374792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.374808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.387147] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.387549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.387567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
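Each entry in the run above is the same three-line pattern: the host-side TCP transport (tcp.c, data_crc32_calc_done) flags a data digest mismatch on a data PDU for the in-flight WRITE, and the command is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). NVMe/TCP data digests are CRC32C values carried per data PDU and are only checked when the session is attached with data digest enabled. A rough sketch of how such a session could be attached through the bdevperf RPC socket follows; the rpc.py path and socket mirror the trace later in this log, while the --ddgst flag name, target address, port and subsystem NQN are assumptions, not values read from this log.

```bash
# Sketch only: attach an NVMe-oF/TCP controller with data digest enabled so the
# transport carries and verifies a CRC32C per data PDU. The --ddgst flag name,
# address, port and NQN are assumptions; rpc.py path and socket mirror the trace.
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

$rpc_py bdev_nvme_attach_controller \
	-b nvme0 -t tcp -f ipv4 \
	-a 10.0.0.2 -s 4420 \
	-n nqn.2016-06.io.spdk:cnode1 \
	--ddgst
```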
00:29:08.483 [2024-07-25 10:17:47.399587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.399858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.399875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.413187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.413498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.413515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.425142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.425493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.425510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.437448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.437695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.437712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.451325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.451570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.451587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.463739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.463988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.464005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.475753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.476093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.476109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.488979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.489242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.489258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.503357] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.503604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.503621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.516270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.516675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.516692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.529691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.530069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.530086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.483 [2024-07-25 10:17:47.544117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.483 [2024-07-25 10:17:47.544392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.483 [2024-07-25 10:17:47.544410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.484 [2024-07-25 10:17:47.558276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.484 [2024-07-25 10:17:47.558769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.484 [2024-07-25 10:17:47.558786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.484 [2024-07-25 10:17:47.573192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.484 [2024-07-25 10:17:47.573517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.484 [2024-07-25 10:17:47.573534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.484 [2024-07-25 10:17:47.587412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.484 [2024-07-25 10:17:47.587757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.484 [2024-07-25 10:17:47.587773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.484 [2024-07-25 10:17:47.602084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.484 [2024-07-25 10:17:47.602484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.484 [2024-07-25 10:17:47.602501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.616240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.616497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.616515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.629547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.629905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.629922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.643808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.644132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.644148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.657307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.657691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.657711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.670940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.671308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.671326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.685558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.685918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.685934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.699661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.745 [2024-07-25 10:17:47.699910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.745 [2024-07-25 10:17:47.699927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.745 [2024-07-25 10:17:47.713925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.714173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.714190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.727698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.727912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.727927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.742587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.742838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.742855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.756899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.757268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.757284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.771055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.771398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 
[2024-07-25 10:17:47.771415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.785419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.785768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.785784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.798577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.798853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.798871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.812468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.812804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.812820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.825966] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.826332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.826349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.840165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.840457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.840474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.852970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.853312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.853329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.746 [2024-07-25 10:17:47.866726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:08.746 [2024-07-25 10:17:47.867093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:08.746 [2024-07-25 10:17:47.867110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.879871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.880041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.880057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.892436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.892655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.892669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.906492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.906864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.906881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.920828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.921220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.921237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.934992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.935362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.935380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.949801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.950184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.950206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.962826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.963136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.963152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.976572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.976941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.976958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:47.990086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.007 [2024-07-25 10:17:47.990468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.007 [2024-07-25 10:17:47.990485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.007 [2024-07-25 10:17:48.004333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.004667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.004684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.019156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.019580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.019600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.032787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.033045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.033061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.046816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.047197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.047218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.061307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.061687] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.061704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.075738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.076108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.076125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.089220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.089438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.089454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.103347] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.103753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.103770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.117600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.117810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.117825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.008 [2024-07-25 10:17:48.131489] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.008 [2024-07-25 10:17:48.131933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.008 [2024-07-25 10:17:48.131950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.145800] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.146156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.146174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.159461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.159802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.159819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.174154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.174526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.174543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.187820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.188205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.188222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.200658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.201037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.201054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.214082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.214513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.214530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.228888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.229221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.229238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.243178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 [2024-07-25 10:17:48.243647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.243665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.270 [2024-07-25 10:17:48.256314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x756400) with pdu=0x2000190fef90 00:29:09.270 
[2024-07-25 10:17:48.256648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.270 [2024-07-25 10:17:48.256664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.270 00:29:09.270 Latency(us) 00:29:09.270 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.270 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:09.270 nvme0n1 : 2.01 2243.10 280.39 0.00 0.00 7118.73 5406.72 30146.56 00:29:09.270 =================================================================================================================== 00:29:09.270 Total : 2243.10 280.39 0.00 0.00 7118.73 5406.72 30146.56 00:29:09.270 0 00:29:09.270 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:09.270 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:09.270 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:09.270 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:09.270 | .driver_specific 00:29:09.270 | .nvme_error 00:29:09.270 | .status_code 00:29:09.270 | .command_transient_transport_error' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1467375 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1467375 ']' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1467375 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1467375 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1467375' 00:29:09.531 killing process with pid 1467375 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1467375 00:29:09.531 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.531 00:29:09.531 Latency(us) 00:29:09.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.531 =================================================================================================================== 00:29:09.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1467375 00:29:09.531 
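The pass/fail decision for this case comes from the counters kept by the nvme bdev: host/digest.sh calls get_transient_errcount, which reads bdev_get_iostat over the bdevperf RPC socket and pulls command_transient_transport_error out of the bdev's driver_specific nvme_error block (145 here, so the (( 145 > 0 )) check passes). A minimal restatement of that query, using the same socket, bdev name and jq filter as the trace above:

```bash
# Same query as the traced get_transient_errcount helper: read per-bdev NVMe
# error counters over the bdevperf RPC socket and keep only the transient
# transport error count. Socket, bdev name and jq filter are taken from the log.
rpc_py="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"

get_transient_errcount() {
	$rpc_py bdev_get_iostat -b "$1" |
		jq -r '.bdevs[0]
		       | .driver_specific
		       | .nvme_error
		       | .status_code
		       | .command_transient_transport_error'
}

count=$(get_transient_errcount nvme0n1)
(( count > 0 ))   # the digest-error case expects at least one transient error
```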
10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1464977 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 1464977 ']' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 1464977 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:09.531 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1464977 00:29:09.791 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:09.791 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:09.791 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1464977' 00:29:09.791 killing process with pid 1464977 00:29:09.791 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 1464977 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 1464977 00:29:09.792 00:29:09.792 real 0m16.176s 00:29:09.792 user 0m32.115s 00:29:09.792 sys 0m2.958s 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.792 ************************************ 00:29:09.792 END TEST nvmf_digest_error 00:29:09.792 ************************************ 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:09.792 rmmod nvme_tcp 00:29:09.792 rmmod nvme_fabrics 00:29:09.792 rmmod nvme_keyring 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1464977 ']' 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1464977 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 1464977 ']' 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 1464977 00:29:09.792 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1464977) - No such process 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 1464977 is not found' 00:29:09.792 Process with pid 1464977 is not found 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:09.792 10:17:48 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.341 10:17:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:12.341 00:29:12.341 real 0m41.599s 00:29:12.341 user 1m6.218s 00:29:12.341 sys 0m11.268s 00:29:12.341 10:17:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:12.341 10:17:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:12.341 ************************************ 00:29:12.341 END TEST nvmf_digest 00:29:12.341 ************************************ 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:12.341 ************************************ 00:29:12.341 START TEST nvmf_bdevperf 00:29:12.341 ************************************ 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:12.341 * Looking for test storage... 
00:29:12.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.341 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.342 10:17:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:18.933 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:18.933 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:18.933 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.934 10:17:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:18.934 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:18.934 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:18.934 10:17:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:19.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:19.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.512 ms 00:29:19.194 00:29:19.194 --- 10.0.0.2 ping statistics --- 00:29:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.194 rtt min/avg/max/mdev = 0.512/0.512/0.512/0.000 ms 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:19.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:19.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:29:19.194 00:29:19.194 --- 10.0.0.1 ping statistics --- 00:29:19.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:19.194 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:19.194 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1472142 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1472142 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1472142 ']' 
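The commands above put one port of the dual-port NIC (cvl_0_0) into a private network namespace with the target address 10.0.0.2/24, leave its sibling port (cvl_0_1) in the default namespace as the initiator at 10.0.0.1/24, open TCP port 4420, and confirm both directions with ping before the target is started. A minimal sketch of the same wiring, assuming generic port names tgt0/ini0 in place of the cvl_* devices and reusing the namespace name, addresses and port from the log; run as root.

# move the target-side port into its own namespace, keep the initiator-side port local
ip netns add cvl_0_0_ns_spdk
ip link set tgt0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev ini0
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev tgt0
ip link set ini0 up
ip netns exec cvl_0_0_ns_spdk ip link set tgt0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# accept NVMe/TCP traffic (port 4420) arriving on the initiator-side port, as the test does
iptables -I INPUT 1 -i ini0 -p tcp --dport 4420 -j ACCEPT
# sanity-check reachability in both directions
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1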
00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:19.195 10:17:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:19.455 [2024-07-25 10:17:58.376597] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:19.455 [2024-07-25 10:17:58.376663] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:19.455 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.455 [2024-07-25 10:17:58.464257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:19.455 [2024-07-25 10:17:58.558968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:19.455 [2024-07-25 10:17:58.559030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:19.455 [2024-07-25 10:17:58.559039] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:19.455 [2024-07-25 10:17:58.559046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:19.455 [2024-07-25 10:17:58.559052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
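nvmfappstart above launches nvmf_tgt inside that namespace with core mask 0xE (three cores, matching the three reactor threads reported next), records its pid as nvmfpid, and blocks until the app is listening on /var/tmp/spdk.sock. A rough equivalent under the same paths as the log, with a simple poll loop standing in for the harness's waitforlisten helper:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
tgt_pid=$!

# crude stand-in for waitforlisten: wait until the app opens its RPC UNIX socket
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done
echo "nvmf_tgt is up as pid $tgt_pid"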
00:29:19.455 [2024-07-25 10:17:58.559406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:19.455 [2024-07-25 10:17:58.559840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:19.455 [2024-07-25 10:17:58.559844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.397 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:20.397 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:20.397 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:20.397 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:20.397 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 [2024-07-25 10:17:59.208882] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 Malloc0 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:20.398 [2024-07-25 10:17:59.272156] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:20.398 { 00:29:20.398 "params": { 00:29:20.398 "name": "Nvme$subsystem", 00:29:20.398 "trtype": "$TEST_TRANSPORT", 00:29:20.398 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.398 "adrfam": "ipv4", 00:29:20.398 "trsvcid": "$NVMF_PORT", 00:29:20.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.398 "hdgst": ${hdgst:-false}, 00:29:20.398 "ddgst": ${ddgst:-false} 00:29:20.398 }, 00:29:20.398 "method": "bdev_nvme_attach_controller" 00:29:20.398 } 00:29:20.398 EOF 00:29:20.398 )") 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:20.398 10:17:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:20.398 "params": { 00:29:20.398 "name": "Nvme1", 00:29:20.398 "trtype": "tcp", 00:29:20.398 "traddr": "10.0.0.2", 00:29:20.398 "adrfam": "ipv4", 00:29:20.398 "trsvcid": "4420", 00:29:20.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.398 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.398 "hdgst": false, 00:29:20.398 "ddgst": false 00:29:20.398 }, 00:29:20.398 "method": "bdev_nvme_attach_controller" 00:29:20.398 }' 00:29:20.398 [2024-07-25 10:17:59.326781] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:20.398 [2024-07-25 10:17:59.326829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472417 ] 00:29:20.398 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.398 [2024-07-25 10:17:59.384628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.398 [2024-07-25 10:17:59.449085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.658 Running I/O for 1 seconds... 
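The rpc_cmd calls above build up the target side over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as its namespace, and a TCP listener on 10.0.0.2 port 4420. The same sequence can be issued with scripts/rpc.py directly; a sketch with the arguments copied from the log, assuming the default /var/tmp/spdk.sock RPC socket:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as used by the test
"$RPC" bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB RAM-backed bdev, 512 B blocks
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001    # allow any host, fixed serial number
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0                     # expose Malloc0 as the namespace
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420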
00:29:21.612 00:29:21.612 Latency(us) 00:29:21.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.612 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:21.612 Verification LBA range: start 0x0 length 0x4000 00:29:21.612 Nvme1n1 : 1.01 9566.47 37.37 0.00 0.00 13322.14 1925.12 12888.75 00:29:21.612 =================================================================================================================== 00:29:21.612 Total : 9566.47 37.37 0.00 0.00 13322.14 1925.12 12888.75 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1472793 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:21.613 { 00:29:21.613 "params": { 00:29:21.613 "name": "Nvme$subsystem", 00:29:21.613 "trtype": "$TEST_TRANSPORT", 00:29:21.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:21.613 "adrfam": "ipv4", 00:29:21.613 "trsvcid": "$NVMF_PORT", 00:29:21.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:21.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:21.613 "hdgst": ${hdgst:-false}, 00:29:21.613 "ddgst": ${ddgst:-false} 00:29:21.613 }, 00:29:21.613 "method": "bdev_nvme_attach_controller" 00:29:21.613 } 00:29:21.613 EOF 00:29:21.613 )") 00:29:21.613 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:21.874 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:21.874 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:21.874 10:18:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:21.874 "params": { 00:29:21.874 "name": "Nvme1", 00:29:21.874 "trtype": "tcp", 00:29:21.874 "traddr": "10.0.0.2", 00:29:21.874 "adrfam": "ipv4", 00:29:21.874 "trsvcid": "4420", 00:29:21.874 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:21.874 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:21.874 "hdgst": false, 00:29:21.874 "ddgst": false 00:29:21.874 }, 00:29:21.874 "method": "bdev_nvme_attach_controller" 00:29:21.874 }' 00:29:21.874 [2024-07-25 10:18:00.800641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:21.874 [2024-07-25 10:18:00.800710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1472793 ] 00:29:21.874 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.874 [2024-07-25 10:18:00.859094] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:21.874 [2024-07-25 10:18:00.922605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.134 Running I/O for 15 seconds... 
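After the short verify run above completes (the latency summary for Nvme1n1), the test starts a second bdevperf with the same generated attach-controller parameters but a 15-second runtime, so the target can be killed while I/O is in flight. A sketch of that failure injection, assuming the parameters printed by gen_nvmf_target_json are wrapped in the standard SPDK JSON-config envelope (the helper adds this wrapper and streams it over /dev/fd/63; the log only shows the inner fragment) and that $tgt_pid still holds the nvmf_tgt pid from earlier:

cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
"$BDEVPERF" --json /tmp/nvme1.json -q 128 -o 4096 -w verify -t 15 -f &
perf_pid=$!

# a few seconds into the run, yank the target away; every request still queued on the
# TCP qpair is then completed as ABORTED - SQ DELETION, as the notices below show
sleep 3
kill -9 "$tgt_pid"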
00:29:24.683 10:18:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1472142 00:29:24.683 10:18:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:24.683 [2024-07-25 10:18:03.752261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:84752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 
10:18:03.752493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:84864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:84880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:84888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:84896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:84912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:84920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:84928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.683 [2024-07-25 10:18:03.752743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:84936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.683 [2024-07-25 10:18:03.752751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:84960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:84976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:84992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.752983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.752992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:85120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:24.684 [2024-07-25 10:18:03.753192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:85200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753363] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:85232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.684 [2024-07-25 10:18:03.753395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:85248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.684 [2024-07-25 10:18:03.753402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:85280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:85296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:85304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:85344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:71 nsid:1 lba:85392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.685 [2024-07-25 10:18:03.753817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.685 [2024-07-25 10:18:03.753834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.685 [2024-07-25 10:18:03.753850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:85472 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.685 [2024-07-25 10:18:03.753866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.685 [2024-07-25 10:18:03.753882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.685 [2024-07-25 10:18:03.753892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.753990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.753997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 
[2024-07-25 10:18:03.754029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:85664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:85728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:85752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.686 [2024-07-25 10:18:03.754533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc4a570 is same with the state(5) to be set 00:29:24.686 [2024-07-25 10:18:03.754550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:24.686 [2024-07-25 10:18:03.754556] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:24.686 [2024-07-25 10:18:03.754563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85768 len:8 PRP1 0x0 PRP2 0x0 00:29:24.686 [2024-07-25 10:18:03.754572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.686 [2024-07-25 10:18:03.754609] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc4a570 was disconnected and freed. reset controller. 
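For readers decoding the abort dump above: every "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" completion is the NVMe completion-entry status printed as (status code type / status code) plus the phase, more, and do-not-retry bits; SCT 0x0 with SC 0x08 is the generic "Command Aborted due to SQ Deletion" status returned for queued WRITEs when the submission queue is torn down ahead of the controller reset. Below is a minimal sketch of that field layout per the NVMe completion-entry format (Dword 3); it is illustrative only, not SPDK's print routine, and the cdw3 value is a hypothetical example chosen to match the entries above.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Dword 3 of an NVMe completion entry: bit 16 = phase tag, bits 31:17 = status field. */
    uint32_t cdw3 = 0x00100000u;         /* hypothetical value encoding SCT=0x0, SC=0x08, p/m/dnr=0 */

    uint8_t p   = (cdw3 >> 16) & 0x1;    /* phase tag */
    uint8_t sc  = (cdw3 >> 17) & 0xff;   /* status code: 0x08 = command aborted due to SQ deletion */
    uint8_t sct = (cdw3 >> 25) & 0x7;    /* status code type: 0x0 = generic command status */
    uint8_t m   = (cdw3 >> 30) & 0x1;    /* more */
    uint8_t dnr = (cdw3 >> 31) & 0x1;    /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);   /* prints "(00/08) p:0 m:0 dnr:0" */
    return 0;
}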
00:29:24.686 [2024-07-25 10:18:03.758145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.686 [2024-07-25 10:18:03.758190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.686 [2024-07-25 10:18:03.759097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.686 [2024-07-25 10:18:03.759113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.686 [2024-07-25 10:18:03.759121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.686 [2024-07-25 10:18:03.759347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.687 [2024-07-25 10:18:03.759567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.687 [2024-07-25 10:18:03.759575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.687 [2024-07-25 10:18:03.759584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.687 [2024-07-25 10:18:03.763142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.687 [2024-07-25 10:18:03.772368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.687 [2024-07-25 10:18:03.773056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.687 [2024-07-25 10:18:03.773072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.687 [2024-07-25 10:18:03.773080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.687 [2024-07-25 10:18:03.773306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.687 [2024-07-25 10:18:03.773526] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.687 [2024-07-25 10:18:03.773534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.687 [2024-07-25 10:18:03.773540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.687 [2024-07-25 10:18:03.777093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
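The reset cycles from here on all fail the same way: nvme_tcp_qpair_connect_sock gets errno 111 back from the socket connect, which on Linux is ECONNREFUSED, i.e. nothing is accepting on 10.0.0.2:4420 while the target side is down, so the controller is marked failed and bdev_nvme reports "Resetting controller failed." before the next attempt. A minimal POSIX sketch that produces the same errno against a reachable host with no listener on that port is shown below; it is illustrative only, not SPDK code, and the address and port are simply copied from the log.

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port used in the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* If the host is reachable but nothing listens on the port, connect() fails
         * with ECONNREFUSED, which is errno 111 on Linux, matching the log entries. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}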
00:29:24.687 [2024-07-25 10:18:03.786317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.687 [2024-07-25 10:18:03.787146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.687 [2024-07-25 10:18:03.787183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.687 [2024-07-25 10:18:03.787195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.687 [2024-07-25 10:18:03.787444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.687 [2024-07-25 10:18:03.787667] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.687 [2024-07-25 10:18:03.787677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.687 [2024-07-25 10:18:03.787684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.687 [2024-07-25 10:18:03.791278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.687 [2024-07-25 10:18:03.800299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.687 [2024-07-25 10:18:03.800997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.687 [2024-07-25 10:18:03.801016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.687 [2024-07-25 10:18:03.801024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.687 [2024-07-25 10:18:03.801251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.687 [2024-07-25 10:18:03.801471] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.687 [2024-07-25 10:18:03.801480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.687 [2024-07-25 10:18:03.801487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.687 [2024-07-25 10:18:03.805038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.948 [2024-07-25 10:18:03.814273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.815008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.815045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.815056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.815310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.815534] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.815542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.815550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.819114] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.948 [2024-07-25 10:18:03.828128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.828809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.828828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.828836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.829056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.829283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.829291] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.829298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.832855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.948 [2024-07-25 10:18:03.842070] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.842738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.842776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.842786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.843025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.843257] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.843266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.843274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.846829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.948 [2024-07-25 10:18:03.856052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.856825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.856864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.856874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.857113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.857346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.857356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.857369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.860925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.948 [2024-07-25 10:18:03.869940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.871237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.871268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.871279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.871519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.871742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.871751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.871758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.875321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.948 [2024-07-25 10:18:03.883937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.884630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.884648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.884656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.884876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.885096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.885104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.885111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.888666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.948 [2024-07-25 10:18:03.897888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.898635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.898672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.898683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.898922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.948 [2024-07-25 10:18:03.899144] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.948 [2024-07-25 10:18:03.899153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.948 [2024-07-25 10:18:03.899161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.948 [2024-07-25 10:18:03.902718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.948 [2024-07-25 10:18:03.911728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.948 [2024-07-25 10:18:03.912546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.948 [2024-07-25 10:18:03.912587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.948 [2024-07-25 10:18:03.912597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.948 [2024-07-25 10:18:03.912836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.913059] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.913067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.913075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.916634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.949 [2024-07-25 10:18:03.925644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.926442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.926479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.926490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.926728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.926951] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.926960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.926968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.930530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.949 [2024-07-25 10:18:03.939539] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.940300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.940337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.940349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.940592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.940814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.940822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.940830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.944387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.949 [2024-07-25 10:18:03.953393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.954178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.954221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.954233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.954476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.954704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.954713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.954721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.958279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.949 [2024-07-25 10:18:03.967274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.967996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.968014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.968022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.968247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.968467] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.968475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.968482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.972025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.949 [2024-07-25 10:18:03.981233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.981895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.981911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.981918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.982137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.982361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.982370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.982377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.985920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.949 [2024-07-25 10:18:03.995125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:03.995834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:03.995849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:03.995856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:03.996075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:03.996298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:03.996306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:03.996313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:03.999861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.949 [2024-07-25 10:18:04.009076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:04.009735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:04.009751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:04.009759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:04.009978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:04.010197] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:04.010210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:04.010217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:04.013762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.949 [2024-07-25 10:18:04.023191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:04.023732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:04.023748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:04.023755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:04.023974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:04.024194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:04.024206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:04.024214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:04.027760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.949 [2024-07-25 10:18:04.037171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:04.037837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:04.037853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:04.037861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:04.038079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:04.038304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.949 [2024-07-25 10:18:04.038313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.949 [2024-07-25 10:18:04.038320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.949 [2024-07-25 10:18:04.041870] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.949 [2024-07-25 10:18:04.051100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.949 [2024-07-25 10:18:04.051776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.949 [2024-07-25 10:18:04.051791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.949 [2024-07-25 10:18:04.051802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.949 [2024-07-25 10:18:04.052022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.949 [2024-07-25 10:18:04.052246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.950 [2024-07-25 10:18:04.052255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.950 [2024-07-25 10:18:04.052262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.950 [2024-07-25 10:18:04.055803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:24.950 [2024-07-25 10:18:04.065003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.950 [2024-07-25 10:18:04.065676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.950 [2024-07-25 10:18:04.065691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.950 [2024-07-25 10:18:04.065699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.950 [2024-07-25 10:18:04.065917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:24.950 [2024-07-25 10:18:04.066136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:24.950 [2024-07-25 10:18:04.066143] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:24.950 [2024-07-25 10:18:04.066150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:24.950 [2024-07-25 10:18:04.069698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:24.950 [2024-07-25 10:18:04.078899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:24.950 [2024-07-25 10:18:04.079648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.950 [2024-07-25 10:18:04.079664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:24.950 [2024-07-25 10:18:04.079671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:24.950 [2024-07-25 10:18:04.079890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.080109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.080118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.080126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.083676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.211 [2024-07-25 10:18:04.092876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.093557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.093573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.093580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.093799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.094018] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.094029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.094036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.097582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.211 [2024-07-25 10:18:04.106789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.107455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.107470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.107478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.107696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.107915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.107923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.107930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.111478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.211 [2024-07-25 10:18:04.120685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.121381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.121397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.121404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.121623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.121841] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.121850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.121858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.125409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.211 [2024-07-25 10:18:04.134609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.135285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.135300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.135307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.135526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.135745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.135753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.135760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.139306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.211 [2024-07-25 10:18:04.148507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.149188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.149233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.149244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.149483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.149707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.149716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.149723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.153275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.211 [2024-07-25 10:18:04.162478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.163197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.163221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.163229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.163449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.163668] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.211 [2024-07-25 10:18:04.163676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.211 [2024-07-25 10:18:04.163683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.211 [2024-07-25 10:18:04.167235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.211 [2024-07-25 10:18:04.176438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.211 [2024-07-25 10:18:04.177218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.211 [2024-07-25 10:18:04.177255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.211 [2024-07-25 10:18:04.177267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.211 [2024-07-25 10:18:04.177509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.211 [2024-07-25 10:18:04.177731] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.177741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.177748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.181306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.212 [2024-07-25 10:18:04.190302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.191071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.191107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.191118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.191368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.191592] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.191601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.191608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.195158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.212 [2024-07-25 10:18:04.204158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.204975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.205012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.205023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.205279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.205503] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.205511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.205519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.209070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.212 [2024-07-25 10:18:04.218065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.218836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.218873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.218884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.219123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.219363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.219374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.219381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.222930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.212 [2024-07-25 10:18:04.231926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.232709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.232746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.232756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.232995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.233227] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.233236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.233249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.236798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.212 [2024-07-25 10:18:04.245796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.246570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.246607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.246617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.246856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.247079] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.247087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.247094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.250656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.212 [2024-07-25 10:18:04.259653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.260319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.260356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.260368] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.260611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.260833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.260842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.260849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.264409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.212 [2024-07-25 10:18:04.273617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.274301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.274338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.274348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.274587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.274810] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.274818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.274825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.278383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.212 [2024-07-25 10:18:04.287587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.288299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.288340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.288353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.288596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.288818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.288827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.288835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.292393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.212 [2024-07-25 10:18:04.301392] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.302115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.302133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.302141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.302368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.302588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.212 [2024-07-25 10:18:04.302595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.212 [2024-07-25 10:18:04.302602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.212 [2024-07-25 10:18:04.306184] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.212 [2024-07-25 10:18:04.315176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.212 [2024-07-25 10:18:04.315916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.212 [2024-07-25 10:18:04.315953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.212 [2024-07-25 10:18:04.315964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.212 [2024-07-25 10:18:04.316212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.212 [2024-07-25 10:18:04.316436] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.213 [2024-07-25 10:18:04.316444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.213 [2024-07-25 10:18:04.316452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.213 [2024-07-25 10:18:04.320013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.213 [2024-07-25 10:18:04.329015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.213 [2024-07-25 10:18:04.329798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.213 [2024-07-25 10:18:04.329835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.213 [2024-07-25 10:18:04.329845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.213 [2024-07-25 10:18:04.330084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.213 [2024-07-25 10:18:04.330320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.213 [2024-07-25 10:18:04.330330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.213 [2024-07-25 10:18:04.330337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.213 [2024-07-25 10:18:04.333887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.213 [2024-07-25 10:18:04.342881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.473 [2024-07-25 10:18:04.343642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.473 [2024-07-25 10:18:04.343679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.473 [2024-07-25 10:18:04.343690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.473 [2024-07-25 10:18:04.343929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.344152] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.344161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.344169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.347728] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.474 [2024-07-25 10:18:04.356724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.357489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.357526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.357536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.357775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.357998] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.358006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.358014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.361573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.474 [2024-07-25 10:18:04.370573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.371306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.371342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.371353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.371592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.371815] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.371823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.371831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.375395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.474 [2024-07-25 10:18:04.384389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.385156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.385193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.385212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.385452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.385675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.385683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.385690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.389240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.474 [2024-07-25 10:18:04.398233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.398917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.398954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.398965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.399217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.399441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.399449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.399457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.403008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.474 [2024-07-25 10:18:04.412227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.413034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.413070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.413081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.413329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.413553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.413561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.413569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.417118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.474 [2024-07-25 10:18:04.426121] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.426903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.426940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.426955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.427194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.427425] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.427434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.427442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.430994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.474 [2024-07-25 10:18:04.440039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.440656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.440693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.440704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.440942] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.441165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.441173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.441181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.444742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.474 [2024-07-25 10:18:04.453953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.454763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.454800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.454811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.455049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.455284] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.455293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.455301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.458850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.474 [2024-07-25 10:18:04.467849] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.468640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.468676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.468687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.468925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.469148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.474 [2024-07-25 10:18:04.469161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.474 [2024-07-25 10:18:04.469169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.474 [2024-07-25 10:18:04.472730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.474 [2024-07-25 10:18:04.481725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.474 [2024-07-25 10:18:04.482539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.474 [2024-07-25 10:18:04.482576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.474 [2024-07-25 10:18:04.482588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.474 [2024-07-25 10:18:04.482830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.474 [2024-07-25 10:18:04.483052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.483061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.483069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.486627] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.475 [2024-07-25 10:18:04.495653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.496418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.496455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.496467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.496710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.496933] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.496941] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.496948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.500506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.475 [2024-07-25 10:18:04.509518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.510304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.510341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.510352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.510595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.510817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.510826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.510833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.514394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.475 [2024-07-25 10:18:04.523405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.524181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.524225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.524236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.524475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.524697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.524706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.524713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.528264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.475 [2024-07-25 10:18:04.537256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.537989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.538007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.538015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.538240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.538460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.538469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.538475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.542022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.475 [2024-07-25 10:18:04.551222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.551975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.552012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.552023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.552270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.552494] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.552502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.552510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.556058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.475 [2024-07-25 10:18:04.565059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.565869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.565907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.565917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.566164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.566396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.566405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.566412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.569962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.475 [2024-07-25 10:18:04.578959] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.579711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.579748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.579759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.579998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.580229] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.580238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.580246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.583795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.475 [2024-07-25 10:18:04.592790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.475 [2024-07-25 10:18:04.593559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.475 [2024-07-25 10:18:04.593596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.475 [2024-07-25 10:18:04.593606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.475 [2024-07-25 10:18:04.593845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.475 [2024-07-25 10:18:04.594068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.475 [2024-07-25 10:18:04.594077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.475 [2024-07-25 10:18:04.594084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.475 [2024-07-25 10:18:04.597641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.737 [2024-07-25 10:18:04.606646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.607461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.607498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.607508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.607747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.607969] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.607978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.607990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.611549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.737 [2024-07-25 10:18:04.620553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.621299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.621337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.621349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.621589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.621811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.621820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.621827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.625386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.737 [2024-07-25 10:18:04.634389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.635140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.635177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.635187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.635435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.635659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.635667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.635675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.639228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.737 [2024-07-25 10:18:04.648228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.649034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.649071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.649081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.649329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.649553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.649562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.649569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.653118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.737 [2024-07-25 10:18:04.662123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.662925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.662965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.662976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.663229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.663453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.663461] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.663469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.667023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.737 [2024-07-25 10:18:04.676033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.676846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.676883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.676894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.677133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.677366] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.677376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.677383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.680936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.737 [2024-07-25 10:18:04.689937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.690624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.690643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.690651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.690871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.691090] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.691098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.691104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.694657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.737 [2024-07-25 10:18:04.703871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.704629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.737 [2024-07-25 10:18:04.704666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.737 [2024-07-25 10:18:04.704677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.737 [2024-07-25 10:18:04.704917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.737 [2024-07-25 10:18:04.705145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.737 [2024-07-25 10:18:04.705154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.737 [2024-07-25 10:18:04.705161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.737 [2024-07-25 10:18:04.708731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.737 [2024-07-25 10:18:04.717731] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.737 [2024-07-25 10:18:04.718530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.718567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.718578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.718817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.719039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.719048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.719055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.722622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.738 [2024-07-25 10:18:04.731621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.732443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.732480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.732490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.732729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.732952] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.732960] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.732968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.736523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.738 [2024-07-25 10:18:04.745522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.746281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.746319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.746329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.746569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.746792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.746801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.746809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.750372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.738 [2024-07-25 10:18:04.759366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.760178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.760222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.760233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.760473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.760697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.760706] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.760713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.764266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.738 [2024-07-25 10:18:04.773274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.774039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.774075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.774086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.774334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.774558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.774567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.774574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.778126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.738 [2024-07-25 10:18:04.787130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.787958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.787994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.788005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.788253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.788476] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.788485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.788493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.792048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.738 [2024-07-25 10:18:04.801129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.801852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.801872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.801884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.802104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.802331] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.802339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.802346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.805905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.738 [2024-07-25 10:18:04.815111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.815884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.815922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.815932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.816171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.816403] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.816413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.816421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.819974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.738 [2024-07-25 10:18:04.828991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.829760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.829797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.829807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.830046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.830279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.830289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.830296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.833853] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:25.738 [2024-07-25 10:18:04.842854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.843627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.843664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.843674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.843913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.844135] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.844148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.844156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.847715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:25.738 [2024-07-25 10:18:04.856717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:25.738 [2024-07-25 10:18:04.857425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:25.738 [2024-07-25 10:18:04.857444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:25.738 [2024-07-25 10:18:04.857452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:25.738 [2024-07-25 10:18:04.857672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:25.738 [2024-07-25 10:18:04.857891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:25.738 [2024-07-25 10:18:04.857898] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:25.738 [2024-07-25 10:18:04.857905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:25.738 [2024-07-25 10:18:04.861454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.003 [2024-07-25 10:18:04.870662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.003 [2024-07-25 10:18:04.871420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.003 [2024-07-25 10:18:04.871457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.003 [2024-07-25 10:18:04.871467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.003 [2024-07-25 10:18:04.871706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.003 [2024-07-25 10:18:04.871928] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.003 [2024-07-25 10:18:04.871937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.003 [2024-07-25 10:18:04.871944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.003 [2024-07-25 10:18:04.875507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.003 [2024-07-25 10:18:04.884499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.003 [2024-07-25 10:18:04.885305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.003 [2024-07-25 10:18:04.885341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.003 [2024-07-25 10:18:04.885352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.003 [2024-07-25 10:18:04.885591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.003 [2024-07-25 10:18:04.885814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.003 [2024-07-25 10:18:04.885823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.003 [2024-07-25 10:18:04.885830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.003 [2024-07-25 10:18:04.889387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.003 [2024-07-25 10:18:04.898383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.003 [2024-07-25 10:18:04.899187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.003 [2024-07-25 10:18:04.899230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.003 [2024-07-25 10:18:04.899243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.003 [2024-07-25 10:18:04.899482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.003 [2024-07-25 10:18:04.899705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.003 [2024-07-25 10:18:04.899714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.003 [2024-07-25 10:18:04.899721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.003 [2024-07-25 10:18:04.903273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.003 [2024-07-25 10:18:04.912276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.003 [2024-07-25 10:18:04.913081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.003 [2024-07-25 10:18:04.913117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.003 [2024-07-25 10:18:04.913127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.003 [2024-07-25 10:18:04.913374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.003 [2024-07-25 10:18:04.913598] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.003 [2024-07-25 10:18:04.913606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.003 [2024-07-25 10:18:04.913614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.003 [2024-07-25 10:18:04.917161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.004 [2024-07-25 10:18:04.926164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.926894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.926913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.926921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.927141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.927367] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.927376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.927383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:04.930928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.004 [2024-07-25 10:18:04.940122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.940873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.940910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.940920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.941163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.941396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.941406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.941413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:04.944960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.004 [2024-07-25 10:18:04.953952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.954642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.954661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.954669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.954888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.955107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.955115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.955122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:04.958669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.004 [2024-07-25 10:18:04.967863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.968630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.968667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.968677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.968916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.969139] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.969148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.969155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:04.972714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.004 [2024-07-25 10:18:04.981712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.982391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.982411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.982418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.982639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.982857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.982865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.982876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:04.986424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.004 [2024-07-25 10:18:04.995623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:04.996421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:04.996458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:04.996468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:04.996708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:04.996931] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:04.996939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:04.996946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:05.000505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.004 [2024-07-25 10:18:05.009513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:05.010279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:05.010317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:05.010329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:05.010569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:05.010792] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:05.010800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:05.010807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:05.014368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.004 [2024-07-25 10:18:05.023527] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:05.024289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:05.024326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:05.024338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:05.024580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:05.024803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:05.024811] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:05.024819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:05.028373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.004 [2024-07-25 10:18:05.037371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:05.038183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:05.038231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:05.038242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:05.038481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:05.038704] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:05.038713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:05.038720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:05.042274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.004 [2024-07-25 10:18:05.051269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.004 [2024-07-25 10:18:05.051878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.004 [2024-07-25 10:18:05.051914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.004 [2024-07-25 10:18:05.051925] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.004 [2024-07-25 10:18:05.052164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.004 [2024-07-25 10:18:05.052396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.004 [2024-07-25 10:18:05.052406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.004 [2024-07-25 10:18:05.052413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.004 [2024-07-25 10:18:05.055962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.005 [2024-07-25 10:18:05.065169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.005 [2024-07-25 10:18:05.065907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.005 [2024-07-25 10:18:05.065944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.005 [2024-07-25 10:18:05.065954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.005 [2024-07-25 10:18:05.066193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.005 [2024-07-25 10:18:05.066426] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.005 [2024-07-25 10:18:05.066435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.005 [2024-07-25 10:18:05.066442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.005 [2024-07-25 10:18:05.069991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.005 [2024-07-25 10:18:05.078984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.005 [2024-07-25 10:18:05.079792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.005 [2024-07-25 10:18:05.079828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.005 [2024-07-25 10:18:05.079839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.005 [2024-07-25 10:18:05.080078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.005 [2024-07-25 10:18:05.080317] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.005 [2024-07-25 10:18:05.080327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.005 [2024-07-25 10:18:05.080334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.005 [2024-07-25 10:18:05.083883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.005 [2024-07-25 10:18:05.092875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.005 [2024-07-25 10:18:05.093684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.005 [2024-07-25 10:18:05.093721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.005 [2024-07-25 10:18:05.093732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.005 [2024-07-25 10:18:05.093970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.005 [2024-07-25 10:18:05.094193] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.005 [2024-07-25 10:18:05.094210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.005 [2024-07-25 10:18:05.094218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.005 [2024-07-25 10:18:05.097767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.005 [2024-07-25 10:18:05.106759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.005 [2024-07-25 10:18:05.107525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.005 [2024-07-25 10:18:05.107562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.005 [2024-07-25 10:18:05.107572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.005 [2024-07-25 10:18:05.107812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.005 [2024-07-25 10:18:05.108034] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.005 [2024-07-25 10:18:05.108043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.005 [2024-07-25 10:18:05.108050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.005 [2024-07-25 10:18:05.111619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.005 [2024-07-25 10:18:05.120612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.005 [2024-07-25 10:18:05.121374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.005 [2024-07-25 10:18:05.121411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.005 [2024-07-25 10:18:05.121422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.005 [2024-07-25 10:18:05.121661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.005 [2024-07-25 10:18:05.121884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.005 [2024-07-25 10:18:05.121892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.005 [2024-07-25 10:18:05.121900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.005 [2024-07-25 10:18:05.125471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.331 [2024-07-25 10:18:05.134473] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.135173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.135217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.135229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.135472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.135695] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.135705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.135713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.139270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.331 [2024-07-25 10:18:05.148268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.148976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.148995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.149003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.149228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.149449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.149458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.149465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.153010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.331 [2024-07-25 10:18:05.162210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.162904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.162920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.162928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.163147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.163371] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.163379] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.163386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.166931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.331 [2024-07-25 10:18:05.176141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.176887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.176903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.176916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.177135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.177360] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.177369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.177376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.180922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.331 [2024-07-25 10:18:05.190128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.190875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.190912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.190922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.191161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.191391] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.191400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.191408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.194961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.331 [2024-07-25 10:18:05.203997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.204678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.204698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.204705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.204925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.205145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.205153] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.205160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.208722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.331 [2024-07-25 10:18:05.217926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.218689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.218725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.218736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.218975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.219198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.219219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.219227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.222802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.331 [2024-07-25 10:18:05.231809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.232611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.232648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.232659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.232898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.233120] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.233129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.233136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.331 [2024-07-25 10:18:05.236696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.331 [2024-07-25 10:18:05.245708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.331 [2024-07-25 10:18:05.246534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.331 [2024-07-25 10:18:05.246571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.331 [2024-07-25 10:18:05.246582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.331 [2024-07-25 10:18:05.246821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.331 [2024-07-25 10:18:05.247044] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.331 [2024-07-25 10:18:05.247053] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.331 [2024-07-25 10:18:05.247060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.250618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.259617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.260493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.260530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.260543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.260785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.261007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.261016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.261024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.264601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.332 [2024-07-25 10:18:05.273423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.274080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.274116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.274128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.274379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.274603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.274611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.274619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.278176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.287415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.287975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.287995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.288003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.288231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.288451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.288460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.288467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.292019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.332 [2024-07-25 10:18:05.301237] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.302010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.302047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.302058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.302307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.302530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.302539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.302546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.306100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.315128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.315698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.315717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.315724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.315949] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.316168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.316177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.316184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.319741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.332 [2024-07-25 10:18:05.328963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.329642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.329658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.329666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.329885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.330103] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.330112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.330118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.333673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.342887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.343543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.343560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.343568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.343786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.344005] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.344012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.344019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.347574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.332 [2024-07-25 10:18:05.356786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.357545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.357582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.357592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.357831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.358054] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.358062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.358074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.361633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.370635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.371312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.371331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.371339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.371559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.371778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.371786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.371792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.375344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.332 [2024-07-25 10:18:05.384546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.332 [2024-07-25 10:18:05.385308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.332 [2024-07-25 10:18:05.385345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.332 [2024-07-25 10:18:05.385357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.332 [2024-07-25 10:18:05.385597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.332 [2024-07-25 10:18:05.385820] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.332 [2024-07-25 10:18:05.385829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.332 [2024-07-25 10:18:05.385837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.332 [2024-07-25 10:18:05.389394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.332 [2024-07-25 10:18:05.398399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.333 [2024-07-25 10:18:05.399121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.333 [2024-07-25 10:18:05.399139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.333 [2024-07-25 10:18:05.399147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.333 [2024-07-25 10:18:05.399374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.333 [2024-07-25 10:18:05.399594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.333 [2024-07-25 10:18:05.399602] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.333 [2024-07-25 10:18:05.399609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.333 [2024-07-25 10:18:05.403152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.333 [2024-07-25 10:18:05.412367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.333 [2024-07-25 10:18:05.413167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.333 [2024-07-25 10:18:05.413211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.333 [2024-07-25 10:18:05.413224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.333 [2024-07-25 10:18:05.413464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.333 [2024-07-25 10:18:05.413687] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.333 [2024-07-25 10:18:05.413695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.333 [2024-07-25 10:18:05.413703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.333 [2024-07-25 10:18:05.417257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.333 [2024-07-25 10:18:05.426263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.333 [2024-07-25 10:18:05.427078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.333 [2024-07-25 10:18:05.427115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.333 [2024-07-25 10:18:05.427126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.333 [2024-07-25 10:18:05.427373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.333 [2024-07-25 10:18:05.427597] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.333 [2024-07-25 10:18:05.427605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.333 [2024-07-25 10:18:05.427613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.333 [2024-07-25 10:18:05.431165] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.333 [2024-07-25 10:18:05.440167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.333 [2024-07-25 10:18:05.440848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.333 [2024-07-25 10:18:05.440867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.333 [2024-07-25 10:18:05.440875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.333 [2024-07-25 10:18:05.441095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.333 [2024-07-25 10:18:05.441318] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.333 [2024-07-25 10:18:05.441327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.333 [2024-07-25 10:18:05.441334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.333 [2024-07-25 10:18:05.444881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.333 [2024-07-25 10:18:05.454079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.333 [2024-07-25 10:18:05.454842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.333 [2024-07-25 10:18:05.454879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.333 [2024-07-25 10:18:05.454891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.333 [2024-07-25 10:18:05.455133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.333 [2024-07-25 10:18:05.455369] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.333 [2024-07-25 10:18:05.455378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.333 [2024-07-25 10:18:05.455386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.333 [2024-07-25 10:18:05.458938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.595 [2024-07-25 10:18:05.467937] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.595 [2024-07-25 10:18:05.468732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.595 [2024-07-25 10:18:05.468769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.595 [2024-07-25 10:18:05.468780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.595 [2024-07-25 10:18:05.469019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.595 [2024-07-25 10:18:05.469250] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.595 [2024-07-25 10:18:05.469260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.595 [2024-07-25 10:18:05.469267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.595 [2024-07-25 10:18:05.472819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.595 [2024-07-25 10:18:05.481823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.595 [2024-07-25 10:18:05.482593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.595 [2024-07-25 10:18:05.482631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.595 [2024-07-25 10:18:05.482642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.595 [2024-07-25 10:18:05.482881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.595 [2024-07-25 10:18:05.483104] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.595 [2024-07-25 10:18:05.483113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.595 [2024-07-25 10:18:05.483120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.595 [2024-07-25 10:18:05.486680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.595 [2024-07-25 10:18:05.495679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.595 [2024-07-25 10:18:05.496521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.595 [2024-07-25 10:18:05.496558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.595 [2024-07-25 10:18:05.496569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.595 [2024-07-25 10:18:05.496807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.595 [2024-07-25 10:18:05.497030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.595 [2024-07-25 10:18:05.497039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.595 [2024-07-25 10:18:05.497046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.595 [2024-07-25 10:18:05.500616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.595 [2024-07-25 10:18:05.509625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.595 [2024-07-25 10:18:05.510445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.595 [2024-07-25 10:18:05.510482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.595 [2024-07-25 10:18:05.510493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.595 [2024-07-25 10:18:05.510732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.595 [2024-07-25 10:18:05.510954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.510963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.510971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.514528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.596 [2024-07-25 10:18:05.523536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.524304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.524341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.524352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.524594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.524817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.524826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.524833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.528396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.596 [2024-07-25 10:18:05.537394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.538191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.538234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.538245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.538484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.538707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.538715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.538722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.542275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.596 [2024-07-25 10:18:05.551274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.552042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.552079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.552094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.552343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.552567] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.552576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.552583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.556133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.596 [2024-07-25 10:18:05.565131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.565850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.565869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.565877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.566097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.566320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.566328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.566335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.569881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.596 [2024-07-25 10:18:05.579088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.579846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.579882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.579893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.580132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.580363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.580373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.580381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.583932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.596 [2024-07-25 10:18:05.592931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.593743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.593780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.593791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.594030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.594260] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.594274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.594282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.597832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.596 [2024-07-25 10:18:05.606837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.607604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.607641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.607651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.607890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.608113] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.608121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.608128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.611697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.596 [2024-07-25 10:18:05.620696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.621402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.621421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.621429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.621649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.621868] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.621875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.621882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.625440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.596 [2024-07-25 10:18:05.634646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.596 [2024-07-25 10:18:05.635325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.596 [2024-07-25 10:18:05.635362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.596 [2024-07-25 10:18:05.635374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.596 [2024-07-25 10:18:05.635615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.596 [2024-07-25 10:18:05.635838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.596 [2024-07-25 10:18:05.635848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.596 [2024-07-25 10:18:05.635855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.596 [2024-07-25 10:18:05.639412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.596 [2024-07-25 10:18:05.648627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.649445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.649482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.649493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.649732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.649954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.649963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.649971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.653532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.597 [2024-07-25 10:18:05.662529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.663292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.663329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.663341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.663584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.663806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.663815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.663822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.667381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.597 [2024-07-25 10:18:05.676387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.677066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.677086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.677094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.677318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.677538] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.677546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.677553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.681099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.597 [2024-07-25 10:18:05.690305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.690875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.690891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.690898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.691122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.691347] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.691356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.691363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.694907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.597 [2024-07-25 10:18:05.704106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.704782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.704797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.704804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.705023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.705245] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.705254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.705260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.708803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.597 [2024-07-25 10:18:05.718016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.597 [2024-07-25 10:18:05.718673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.597 [2024-07-25 10:18:05.718688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.597 [2024-07-25 10:18:05.718695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.597 [2024-07-25 10:18:05.718913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.597 [2024-07-25 10:18:05.719132] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.597 [2024-07-25 10:18:05.719140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.597 [2024-07-25 10:18:05.719147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.597 [2024-07-25 10:18:05.722696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.860 [2024-07-25 10:18:05.731904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.860 [2024-07-25 10:18:05.732576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.860 [2024-07-25 10:18:05.732592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.860 [2024-07-25 10:18:05.732599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.860 [2024-07-25 10:18:05.732819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.860 [2024-07-25 10:18:05.733038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.860 [2024-07-25 10:18:05.733046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.860 [2024-07-25 10:18:05.733056] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.860 [2024-07-25 10:18:05.736604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.860 [2024-07-25 10:18:05.745804] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.860 [2024-07-25 10:18:05.746485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.860 [2024-07-25 10:18:05.746523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.860 [2024-07-25 10:18:05.746534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.860 [2024-07-25 10:18:05.746772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.746996] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.747005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.747012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.750571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.861 [2024-07-25 10:18:05.759783] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.760554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.760591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.760603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.760845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.761068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.761078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.761086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.764649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.861 [2024-07-25 10:18:05.773656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.774485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.774521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.774532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.774771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.774994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.775003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.775010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.778567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.861 [2024-07-25 10:18:05.787567] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.788287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.788306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.788314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.788533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.788752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.788760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.788767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.792318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.861 [2024-07-25 10:18:05.801524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.802174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.802218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.802229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.802468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.802692] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.802700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.802707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.806264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.861 [2024-07-25 10:18:05.815488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.816163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.816181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.816189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.816414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.816634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.816641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.816648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.820194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.861 [2024-07-25 10:18:05.829410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.830703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.830735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.830745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.830984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.831219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.831229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.831236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.834788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.861 [2024-07-25 10:18:05.843242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.844033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.844070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.844080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.844329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.844553] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.844563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.844571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.848121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.861 [2024-07-25 10:18:05.857132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.857922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.857959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.857970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.858217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.858441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.858450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.858457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.862009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.861 [2024-07-25 10:18:05.871015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.871804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.871841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.871853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.872093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.872323] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.861 [2024-07-25 10:18:05.872332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.861 [2024-07-25 10:18:05.872339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.861 [2024-07-25 10:18:05.875893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.861 [2024-07-25 10:18:05.884887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.861 [2024-07-25 10:18:05.885506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.861 [2024-07-25 10:18:05.885525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.861 [2024-07-25 10:18:05.885533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.861 [2024-07-25 10:18:05.885753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.861 [2024-07-25 10:18:05.885972] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.885980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.885987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.889538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.862 [2024-07-25 10:18:05.898740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.899404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.899441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.899451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.899690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.899912] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.899921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.899928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.903484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.862 [2024-07-25 10:18:05.912709] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.913479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.913515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.913527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.913770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.913993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.914001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.914009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.917570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.862 [2024-07-25 10:18:05.926579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.927186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.927209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.927222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.927442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.927661] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.927669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.927676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.931225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.862 [2024-07-25 10:18:05.940426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.941133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.941149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.941156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.941380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.941599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.941607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.941614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.945155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.862 [2024-07-25 10:18:05.954359] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.955063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.955078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.955086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.955310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.955530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.955537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.955544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.959086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:26.862 [2024-07-25 10:18:05.968282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.968948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.968963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.968970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.969189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.969413] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.969425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.969432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.972975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:26.862 [2024-07-25 10:18:05.982171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:26.862 [2024-07-25 10:18:05.982831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:26.862 [2024-07-25 10:18:05.982846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:26.862 [2024-07-25 10:18:05.982854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:26.862 [2024-07-25 10:18:05.983073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:26.862 [2024-07-25 10:18:05.983296] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:26.862 [2024-07-25 10:18:05.983305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:26.862 [2024-07-25 10:18:05.983311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:26.862 [2024-07-25 10:18:05.986854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:05.996050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:05.996597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:05.996612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:05.996619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:05.996838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:05.997057] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:05.997064] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:05.997071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.000617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.124 [2024-07-25 10:18:06.010037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.010703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.010718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.010726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.010944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.011163] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.011171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.011178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.014726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:06.024091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.024789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.024806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.024814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.025033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.025261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.025270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.025278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.028829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.124 [2024-07-25 10:18:06.038026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.038693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.038709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.038717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.038936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.039155] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.039162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.039169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.042719] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:06.051912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.052577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.052592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.052600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.052818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.053037] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.053044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.053051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.056601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.124 [2024-07-25 10:18:06.065797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.066454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.066469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.066476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.066699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.066917] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.066925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.066932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.070479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:06.079673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.080484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.080520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.080530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.080770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.080992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.081000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.081008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.084567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.124 [2024-07-25 10:18:06.093562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.094300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.094337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.094349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.094589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.094812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.094821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.094828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.098386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:06.107394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.108224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.108261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.108273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.108516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.108738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.108747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.108759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.112329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.124 [2024-07-25 10:18:06.121330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.122092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.122129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.122139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.122387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.122611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.122619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.124 [2024-07-25 10:18:06.122627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.124 [2024-07-25 10:18:06.126185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.124 [2024-07-25 10:18:06.135184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.124 [2024-07-25 10:18:06.135938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.124 [2024-07-25 10:18:06.135974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.124 [2024-07-25 10:18:06.135984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.124 [2024-07-25 10:18:06.136231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.124 [2024-07-25 10:18:06.136455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.124 [2024-07-25 10:18:06.136463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.136471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.140022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.125 [2024-07-25 10:18:06.149022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.149773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.149810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.149820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.150059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.150290] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.150300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.150307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.153860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.125 [2024-07-25 10:18:06.162866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.163541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.163578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.163590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.163833] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.164056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.164065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.164073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.167631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.125 [2024-07-25 10:18:06.176846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.177652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.177689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.177699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.177938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.178161] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.178170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.178177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.181735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.125 [2024-07-25 10:18:06.190732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.191532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.191569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.191579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.191818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.192041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.192049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.192057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.195615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.125 [2024-07-25 10:18:06.204615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.205440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.205477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.205487] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.205726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.205953] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.205961] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.205969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.209537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.125 [2024-07-25 10:18:06.218533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.219294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.219332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.219344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.219584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.219806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.219815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.219822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.223382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.125 [2024-07-25 10:18:06.232412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.233178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.233221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.233234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.233474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.233697] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.233705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.233713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.237267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.125 [2024-07-25 10:18:06.246265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.125 [2024-07-25 10:18:06.247069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.125 [2024-07-25 10:18:06.247106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.125 [2024-07-25 10:18:06.247117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.125 [2024-07-25 10:18:06.247364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.125 [2024-07-25 10:18:06.247588] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.125 [2024-07-25 10:18:06.247596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.125 [2024-07-25 10:18:06.247604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.125 [2024-07-25 10:18:06.251159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.386 [2024-07-25 10:18:06.260163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.386 [2024-07-25 10:18:06.260929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.386 [2024-07-25 10:18:06.260966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.386 [2024-07-25 10:18:06.260977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.386 [2024-07-25 10:18:06.261224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.386 [2024-07-25 10:18:06.261448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.386 [2024-07-25 10:18:06.261456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.386 [2024-07-25 10:18:06.261464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.386 [2024-07-25 10:18:06.265014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.386 [2024-07-25 10:18:06.274015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.386 [2024-07-25 10:18:06.274795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.386 [2024-07-25 10:18:06.274832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.386 [2024-07-25 10:18:06.274842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.386 [2024-07-25 10:18:06.275081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.386 [2024-07-25 10:18:06.275310] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.386 [2024-07-25 10:18:06.275320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.386 [2024-07-25 10:18:06.275327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.386 [2024-07-25 10:18:06.278880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.386 [2024-07-25 10:18:06.287877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.386 [2024-07-25 10:18:06.288639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.386 [2024-07-25 10:18:06.288676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.386 [2024-07-25 10:18:06.288686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.386 [2024-07-25 10:18:06.288925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.386 [2024-07-25 10:18:06.289147] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.289156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.289163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.292724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.387 [2024-07-25 10:18:06.301721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.302530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.302566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.302582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.302821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.303043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.303052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.303059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.306617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.387 [2024-07-25 10:18:06.315625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.316483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.316520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.316531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.316769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.316991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.317000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.317008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.320568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.387 [2024-07-25 10:18:06.329574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.330390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.330426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.330436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.330675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.330898] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.330906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.330915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.334474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.387 [2024-07-25 10:18:06.343475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.344245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.344282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.344292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.344531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.344754] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.344766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.344774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.348334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.387 [2024-07-25 10:18:06.357357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.358164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.358209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.358221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.358462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.358685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.358693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.358700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.362251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.387 [2024-07-25 10:18:06.371246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.371998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.372035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.372046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.372294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.372518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.372527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.372534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.376084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.387 [2024-07-25 10:18:06.385082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.385894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.385931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.385941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.386180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.386411] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.386421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.386428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.389978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.387 [2024-07-25 10:18:06.398984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.399718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.399755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.399766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.400004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.400236] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.400245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.400252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.403801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.387 [2024-07-25 10:18:06.412807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.387 [2024-07-25 10:18:06.413578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.387 [2024-07-25 10:18:06.413615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.387 [2024-07-25 10:18:06.413625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.387 [2024-07-25 10:18:06.413864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.387 [2024-07-25 10:18:06.414087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.387 [2024-07-25 10:18:06.414095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.387 [2024-07-25 10:18:06.414103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.387 [2024-07-25 10:18:06.417663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.387 [2024-07-25 10:18:06.426663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.427473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.427510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.427521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.427759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.427982] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.427990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.427998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.431556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.388 [2024-07-25 10:18:06.440556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.441300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.441336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.441348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.441595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.441818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.441827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.441834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.445393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.388 [2024-07-25 10:18:06.454390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.455199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.455242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.455252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.455491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.455714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.455723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.455730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.459287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.388 [2024-07-25 10:18:06.468282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.468959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.468977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.468984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.469210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.469430] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.469438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.469445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.472989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.388 [2024-07-25 10:18:06.482190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.482910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.482947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.482957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.483196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.483429] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.483437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.483452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.487004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.388 [2024-07-25 10:18:06.496000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.496774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.496811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.496821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.497060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.497292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.497301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.497308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.500858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.388 [2024-07-25 10:18:06.509864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.388 [2024-07-25 10:18:06.510630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.388 [2024-07-25 10:18:06.510666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.388 [2024-07-25 10:18:06.510678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.388 [2024-07-25 10:18:06.510920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.388 [2024-07-25 10:18:06.511143] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.388 [2024-07-25 10:18:06.511151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.388 [2024-07-25 10:18:06.511159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.388 [2024-07-25 10:18:06.514717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.650 [2024-07-25 10:18:06.523718] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.650 [2024-07-25 10:18:06.524499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.650 [2024-07-25 10:18:06.524536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.650 [2024-07-25 10:18:06.524547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.650 [2024-07-25 10:18:06.524785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.650 [2024-07-25 10:18:06.525008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.650 [2024-07-25 10:18:06.525017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.650 [2024-07-25 10:18:06.525024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.650 [2024-07-25 10:18:06.528593] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.650 [2024-07-25 10:18:06.537595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.650 [2024-07-25 10:18:06.538302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.650 [2024-07-25 10:18:06.538339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.650 [2024-07-25 10:18:06.538351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.650 [2024-07-25 10:18:06.538593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.650 [2024-07-25 10:18:06.538816] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.650 [2024-07-25 10:18:06.538825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.650 [2024-07-25 10:18:06.538833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.650 [2024-07-25 10:18:06.542394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.650 [2024-07-25 10:18:06.551599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.650 [2024-07-25 10:18:06.552280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.650 [2024-07-25 10:18:06.552317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.650 [2024-07-25 10:18:06.552328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.650 [2024-07-25 10:18:06.552567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.552790] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.552798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.552806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.556366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.651 [2024-07-25 10:18:06.565572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.566290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.566327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.566338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.566576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.566799] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.566808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.566816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.570375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.651 [2024-07-25 10:18:06.579375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.580160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.580197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.580216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.580455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.580682] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.580690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.580698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.584253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.651 [2024-07-25 10:18:06.593248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.593833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.593870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.593881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.594120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.594352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.594361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.594368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.597919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.651 [2024-07-25 10:18:06.607123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.607933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.607970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.607980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.608228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.608452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.608460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.608468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.612028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.651 [2024-07-25 10:18:06.621027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.621845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.621883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.621894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.622132] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.622364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.622374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.622381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.625935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.651 [2024-07-25 10:18:06.634939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.635744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.635780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.635791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.636029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.636261] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.636270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.636278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.639830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.651 [2024-07-25 10:18:06.648832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.649507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.649543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.649555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.649794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.650016] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.650024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.650031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.653592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.651 [2024-07-25 10:18:06.662797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.663569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.663606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.663616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.663855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.664078] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.664087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.664094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.667655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.651 [2024-07-25 10:18:06.676657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.677261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.677298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.677315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.677555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.677778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.651 [2024-07-25 10:18:06.677786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.651 [2024-07-25 10:18:06.677793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.651 [2024-07-25 10:18:06.681356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.651 [2024-07-25 10:18:06.690559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.651 [2024-07-25 10:18:06.691279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.651 [2024-07-25 10:18:06.691316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.651 [2024-07-25 10:18:06.691326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.651 [2024-07-25 10:18:06.691565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.651 [2024-07-25 10:18:06.691787] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.691796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.691804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.695360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.652 [2024-07-25 10:18:06.704358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 [2024-07-25 10:18:06.705164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.705209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.705222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.705462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.705684] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.705693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.705700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.709265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.652 [2024-07-25 10:18:06.718259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 [2024-07-25 10:18:06.718861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.718897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.718907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.719146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.719379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.719393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.719400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.722951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.652 [2024-07-25 10:18:06.732163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 [2024-07-25 10:18:06.732954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.732991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.733002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.733249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.733473] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.733482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.733489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.737040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.652 [2024-07-25 10:18:06.746049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 [2024-07-25 10:18:06.746857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.746893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.746904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.747143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.747374] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.747384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.747391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1472142 Killed "${NVMF_APP[@]}" "$@" 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.652 [2024-07-25 10:18:06.750944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1473878 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1473878 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 1473878 ']' 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:27.652 [2024-07-25 10:18:06.759945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:27.652 10:18:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:27.652 [2024-07-25 10:18:06.760730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.760767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.760779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.761018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.761249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.761258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.761266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.764820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
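Interleaved with the reset failures, the harness is restarting the target: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the new process (pid 1473878) accepts RPC connections on /var/tmp/spdk.sock. Below is a minimal sketch of that wait, assuming nothing beyond the socket path printed in the log; the harness's own waitforlisten helper also keeps checking that the pid is still alive while it polls.

```python
import socket
import time


def wait_for_rpc_socket(path: str = "/var/tmp/spdk.sock", timeout: float = 30.0) -> bool:
    """Poll until a UNIX-domain socket at `path` accepts connections, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
                sock.settimeout(1.0)
                sock.connect(path)   # succeeds once nvmf_tgt has bound and listened on its RPC socket
                return True
        except OSError:              # ENOENT before the bind, ECONNREFUSED before the listen
            time.sleep(0.2)
    return False
```

Once the socket answers, tgt_init can continue configuring the restarted target over RPC.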
00:29:27.652 [2024-07-25 10:18:06.773822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.652 [2024-07-25 10:18:06.774517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.652 [2024-07-25 10:18:06.774555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.652 [2024-07-25 10:18:06.774565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.652 [2024-07-25 10:18:06.774804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.652 [2024-07-25 10:18:06.775027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.652 [2024-07-25 10:18:06.775035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.652 [2024-07-25 10:18:06.775042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.652 [2024-07-25 10:18:06.778601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.914 [2024-07-25 10:18:06.787816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.914 [2024-07-25 10:18:06.788589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.914 [2024-07-25 10:18:06.788626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.914 [2024-07-25 10:18:06.788637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.914 [2024-07-25 10:18:06.788876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.914 [2024-07-25 10:18:06.789099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.914 [2024-07-25 10:18:06.789108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.914 [2024-07-25 10:18:06.789116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.914 [2024-07-25 10:18:06.792678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.914 [2024-07-25 10:18:06.801708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.914 [2024-07-25 10:18:06.802364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.914 [2024-07-25 10:18:06.802401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.914 [2024-07-25 10:18:06.802412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.914 [2024-07-25 10:18:06.802651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.914 [2024-07-25 10:18:06.802874] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.914 [2024-07-25 10:18:06.802882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.914 [2024-07-25 10:18:06.802890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.914 [2024-07-25 10:18:06.806446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.914 [2024-07-25 10:18:06.815664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.914 [2024-07-25 10:18:06.816315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.914 [2024-07-25 10:18:06.816352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.914 [2024-07-25 10:18:06.816364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.914 [2024-07-25 10:18:06.816604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.914 [2024-07-25 10:18:06.816826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.816835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.816843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.819517] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:27.915 [2024-07-25 10:18:06.819573] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.915 [2024-07-25 10:18:06.820401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.915 [2024-07-25 10:18:06.829616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.830271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.830308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.830321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.830562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.830785] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.830793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.830801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.834362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.915 [2024-07-25 10:18:06.843575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.844288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.844326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.844338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.844580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.844803] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.844813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.844820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.848382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.915 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.915 [2024-07-25 10:18:06.857388] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.858192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.858237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.858249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.858492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.858715] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.858724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.858732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.862287] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.915 [2024-07-25 10:18:06.871295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.871944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.871981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.871992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.872238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.872462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.872471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.872478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.876032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
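The EAL: No free 2048 kB hugepages reported on node 1 line is typically informational: the 2 MiB pool on that NUMA node is empty, and DPDK falls back to whatever hugepages are reserved elsewhere (another node or another page size). The per-node pools can be inspected directly through sysfs; the paths below are the kernel's standard layout, not anything SPDK-specific.

```python
import glob

# Print total and free 2 MiB hugepages per NUMA node, straight from sysfs.
for node_dir in sorted(glob.glob("/sys/devices/system/node/node*/hugepages/hugepages-2048kB")):
    node = node_dir.split("/")[5]                    # e.g. "node1"
    with open(f"{node_dir}/nr_hugepages") as f:
        total = f.read().strip()
    with open(f"{node_dir}/free_hugepages") as f:
        free = f.read().strip()
    print(f"{node}: {free}/{total} free 2048 kB hugepages")
```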
00:29:27.915 [2024-07-25 10:18:06.885101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.885866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.885902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.885918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.886157] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.886387] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.886397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.886404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.889954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.915 [2024-07-25 10:18:06.898955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.899664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.899701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.899713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.899951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.900174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.900183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.900190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.902395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:27.915 [2024-07-25 10:18:06.903752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.915 [2024-07-25 10:18:06.912766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.913474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.913494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.913502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.913722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.913942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.913950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.913957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.917508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.915 [2024-07-25 10:18:06.926713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.927555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.927593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.927604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.927845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.928068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.928082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.928090] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.931666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.915 [2024-07-25 10:18:06.940671] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.915 [2024-07-25 10:18:06.941528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.915 [2024-07-25 10:18:06.941565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.915 [2024-07-25 10:18:06.941576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.915 [2024-07-25 10:18:06.941816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.915 [2024-07-25 10:18:06.942038] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.915 [2024-07-25 10:18:06.942048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.915 [2024-07-25 10:18:06.942055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.915 [2024-07-25 10:18:06.945613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.916 [2024-07-25 10:18:06.954617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:06.955447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:06.955484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:06.955496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:06.955740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:06.955963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:06.955972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:06.955980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:06.956118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.916 [2024-07-25 10:18:06.956139] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.916 [2024-07-25 10:18:06.956146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.916 [2024-07-25 10:18:06.956152] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.916 [2024-07-25 10:18:06.956157] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
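The app_setup_trace notices record that the restarted target runs with tracepoint group mask 0xFFFF and that its trace buffer is backed by /dev/shm/nvmf_trace.0, which spdk_trace -s nvmf -i 0 can snapshot at runtime or which can be copied for offline analysis. The sketch below is a purely illustrative way to archive that file so it survives workspace cleanup; the destination directory is hypothetical.

```python
import os
import shutil

TRACE_FILE = "/dev/shm/nvmf_trace.0"   # named in the app_setup_trace notices above
ARCHIVE_DIR = "trace-artifacts"        # hypothetical destination, adjust as needed

if os.path.exists(TRACE_FILE):
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    dest = shutil.copy2(TRACE_FILE, ARCHIVE_DIR)
    print(f"saved {dest} ({os.path.getsize(dest)} bytes) for later 'spdk_trace' analysis")
else:
    print("no trace file found; tracing may not be enabled for this run")
```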
00:29:27.916 [2024-07-25 10:18:06.956218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.916 [2024-07-25 10:18:06.956762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.916 [2024-07-25 10:18:06.956834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.916 [2024-07-25 10:18:06.959544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.916 [2024-07-25 10:18:06.968549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:06.969316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:06.969354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:06.969372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:06.969616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:06.969839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:06.969848] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:06.969856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:06.973415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.916 [2024-07-25 10:18:06.982421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:06.982980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:06.982999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:06.983008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:06.983236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:06.983459] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:06.983467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:06.983475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:06.987022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
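The three Reactor started on core 1/2/3 notices at the top of this block line up with the -m 0xE core mask the target was restarted with, and with the earlier Total cores available: 3 message: 0xE has bits 1, 2 and 3 set, so SPDK spawns one reactor on each of those cores. A small helper to expand such a mask (0xE is the value from this run; any mask works):

```python
def cores_from_mask(mask: int) -> list[int]:
    """Expand an SPDK/DPDK-style hexadecimal core mask into the selected CPU cores."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]


print(cores_from_mask(0xE))   # -> [1, 2, 3], matching the three reactors in the log
```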
00:29:27.916 [2024-07-25 10:18:06.996234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:06.997019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:06.997058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:06.997068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:06.997316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:06.997540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:06.997549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:06.997556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:07.001107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.916 [2024-07-25 10:18:07.010142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:07.010807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:07.010844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:07.010856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:07.011095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:07.011324] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:07.011339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:07.011348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:07.014898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:27.916 [2024-07-25 10:18:07.024123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:07.024811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:07.024833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:07.024841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:07.025063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:07.025287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:07.025296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:07.025304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:07.028860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:27.916 [2024-07-25 10:18:07.038064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:27.916 [2024-07-25 10:18:07.038749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:27.916 [2024-07-25 10:18:07.038765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:27.916 [2024-07-25 10:18:07.038773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:27.916 [2024-07-25 10:18:07.038992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:27.916 [2024-07-25 10:18:07.039216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:27.916 [2024-07-25 10:18:07.039225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:27.916 [2024-07-25 10:18:07.039232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:27.916 [2024-07-25 10:18:07.042777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.179 [2024-07-25 10:18:07.051981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.052736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.052774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.052785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.053024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.053254] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.053263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.053271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.056825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.179 [2024-07-25 10:18:07.065828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.066627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.066664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.066675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.066914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.067137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.067146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.067154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.070713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.179 [2024-07-25 10:18:07.079714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.080516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.080553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.080564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.080804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.081026] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.081034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.081042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.084599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.179 [2024-07-25 10:18:07.093604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.094304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.094341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.094352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.094591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.094814] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.094822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.094830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.098388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.179 [2024-07-25 10:18:07.107600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.108426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.108463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.108474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.108718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.108941] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.108950] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.108957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.112525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.179 [2024-07-25 10:18:07.121555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.122414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.122452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.122462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.122701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.122924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.122932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.122940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.126498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.179 [2024-07-25 10:18:07.135506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.136299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.136336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.136348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.136591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.136813] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.136822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.136830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.140389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.179 [2024-07-25 10:18:07.149393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.150187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.150232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.150243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.179 [2024-07-25 10:18:07.150482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.179 [2024-07-25 10:18:07.150705] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.179 [2024-07-25 10:18:07.150713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.179 [2024-07-25 10:18:07.150725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.179 [2024-07-25 10:18:07.154281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.179 [2024-07-25 10:18:07.163288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.179 [2024-07-25 10:18:07.164079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.179 [2024-07-25 10:18:07.164116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.179 [2024-07-25 10:18:07.164126] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.164373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.164597] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.164606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.164613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.168162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.180 [2024-07-25 10:18:07.177167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.177994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.178032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.178043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.178289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.178513] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.178523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.178530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.182081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.180 [2024-07-25 10:18:07.191085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.191896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.191933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.191944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.192183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.192414] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.192424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.192432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.195984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.180 [2024-07-25 10:18:07.204988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.205790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.205835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.205847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.206085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.206316] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.206325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.206332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.209894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.180 [2024-07-25 10:18:07.218899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.219607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.219644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.219655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.219894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.220117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.220125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.220133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.223690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.180 [2024-07-25 10:18:07.232701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.233522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.233559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.233570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.233809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.234032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.234040] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.234048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.237601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.180 [2024-07-25 10:18:07.246601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.247447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.247484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.247496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.247738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.247966] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.247975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.247982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.251542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.180 [2024-07-25 10:18:07.260541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.261444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.261482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.261493] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.261731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.261954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.261963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.261971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.265533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.180 [2024-07-25 10:18:07.274541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.275326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.275363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.275374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.275614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.275836] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.275846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.275854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.279413] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.180 [2024-07-25 10:18:07.288417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.289246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.289284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.289296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.289539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.180 [2024-07-25 10:18:07.289762] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.180 [2024-07-25 10:18:07.289771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.180 [2024-07-25 10:18:07.289779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.180 [2024-07-25 10:18:07.293341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.180 [2024-07-25 10:18:07.302341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.180 [2024-07-25 10:18:07.303182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.180 [2024-07-25 10:18:07.303226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.180 [2024-07-25 10:18:07.303239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.180 [2024-07-25 10:18:07.303480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.181 [2024-07-25 10:18:07.303703] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.181 [2024-07-25 10:18:07.303711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.181 [2024-07-25 10:18:07.303719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.181 [2024-07-25 10:18:07.307274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.444 [2024-07-25 10:18:07.316288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.317120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.317157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.317169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.317420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.317643] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.317652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.317660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.321215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.444 [2024-07-25 10:18:07.330227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.330978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.331015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.331026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.331271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.331496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.331504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.331511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.335062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.444 [2024-07-25 10:18:07.344059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.344702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.344721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.344734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.344955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.345174] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.345182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.345189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.348745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.444 [2024-07-25 10:18:07.357948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.358688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.358725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.358736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.358975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.359198] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.359213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.359220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.362775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.444 [2024-07-25 10:18:07.371780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.372297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.372334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.372346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.372588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.372811] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.372820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.372827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.376385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.444 [2024-07-25 10:18:07.385595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.386414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.386452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.386462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.386701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.386924] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.386937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.386944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.390503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.444 [2024-07-25 10:18:07.399505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.400247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.400273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.400282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.400507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.400728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.400737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.400744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.404297] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.444 [2024-07-25 10:18:07.413304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.414126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.414162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.414174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.414424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.414648] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.414657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.414665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.418216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.444 [2024-07-25 10:18:07.427217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.427908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.427927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.427935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.428154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.428379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.428387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.428394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.431947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.444 [2024-07-25 10:18:07.441157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.444 [2024-07-25 10:18:07.441882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.444 [2024-07-25 10:18:07.441899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.444 [2024-07-25 10:18:07.441906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.444 [2024-07-25 10:18:07.442125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.444 [2024-07-25 10:18:07.442349] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.444 [2024-07-25 10:18:07.442358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.444 [2024-07-25 10:18:07.442366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.444 [2024-07-25 10:18:07.445910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.445 [2024-07-25 10:18:07.455113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.455666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.455703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.455714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.455953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.456176] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.456184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.456192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.459751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.445 [2024-07-25 10:18:07.468963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.469647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.469684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.469695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.469934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.470157] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.470165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.470172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.473740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.445 [2024-07-25 10:18:07.482954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.483777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.483815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.483826] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.484069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.484299] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.484309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.484316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.487868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.445 [2024-07-25 10:18:07.496869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.497658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.497695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.497706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.497946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.498169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.498177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.498185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.501744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.445 [2024-07-25 10:18:07.510757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.511502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.511539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.511551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.511790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.512013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.512022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.512029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.515588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.445 [2024-07-25 10:18:07.524591] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.525450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.525487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.525498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.525738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.525960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.525970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.525982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.529544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.445 [2024-07-25 10:18:07.538558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.539062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.539085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.539093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.539323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.539543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.539552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.539558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.543103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.445 [2024-07-25 10:18:07.552520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.553261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.553285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.553293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.553517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.553737] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.553745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.553752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.557303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.445 [2024-07-25 10:18:07.566506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.445 [2024-07-25 10:18:07.567089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.445 [2024-07-25 10:18:07.567105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.445 [2024-07-25 10:18:07.567112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.445 [2024-07-25 10:18:07.567336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.445 [2024-07-25 10:18:07.567556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.445 [2024-07-25 10:18:07.567564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.445 [2024-07-25 10:18:07.567571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.445 [2024-07-25 10:18:07.571112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
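The block above repeats because the bdev_nvme reset path keeps retrying the same TCP connect toward 10.0.0.2:4420 and every attempt is refused with errno = 111 while the target is not yet listening on that address; on Linux that errno value is ECONNREFUSED. A purely illustrative one-liner (not part of the test scripts) to confirm the errno mapping on a Linux host:

    # Illustrative only: decode errno 111 with the standard Python errno/os modules.
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
    # expected output on Linux: 111 Connection refused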
00:29:28.708 [2024-07-25 10:18:07.580317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.581104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.581141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.581153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.581404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.581628] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.581636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.581644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.585194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.708 [2024-07-25 10:18:07.594193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.594886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.594905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.594913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.595134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.595359] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.595368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.595374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.598918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.708 [2024-07-25 10:18:07.608123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.608910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.608947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.608959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.609198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.609440] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.609450] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.609457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.613012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.708 [2024-07-25 10:18:07.622012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.622749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.622775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.622995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.623219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.623228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.623235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.626783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.708 [2024-07-25 10:18:07.635316] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.708 [2024-07-25 10:18:07.635992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.636478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.636494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.636501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.636721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.636939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.636947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.636954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.640503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.708 [2024-07-25 10:18:07.649911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.650689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.650727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.650738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.650977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.651207] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.651216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.651229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 [2024-07-25 10:18:07.654781] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.708 [2024-07-25 10:18:07.663782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.664465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.664502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.664513] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 [2024-07-25 10:18:07.664753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.664976] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.664984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.664992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:29:28.708 Malloc0 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.708 [2024-07-25 10:18:07.668548] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.708 [2024-07-25 10:18:07.677752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.708 [2024-07-25 10:18:07.678539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.708 [2024-07-25 10:18:07.678576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.708 [2024-07-25 10:18:07.678586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.708 [2024-07-25 10:18:07.678825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.708 [2024-07-25 10:18:07.679048] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.708 [2024-07-25 10:18:07.679057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.708 [2024-07-25 10:18:07.679064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.708 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.708 [2024-07-25 10:18:07.682625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:28.709 [2024-07-25 10:18:07.691621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:28.709 [2024-07-25 10:18:07.692209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.709 [2024-07-25 10:18:07.692245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa193b0 with addr=10.0.0.2, port=4420 00:29:28.709 [2024-07-25 10:18:07.692257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa193b0 is same with the state(5) to be set 00:29:28.709 [2024-07-25 10:18:07.692500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa193b0 (9): Bad file descriptor 00:29:28.709 [2024-07-25 10:18:07.692723] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:28.709 [2024-07-25 10:18:07.692732] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:28.709 [2024-07-25 10:18:07.692739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:28.709 [2024-07-25 10:18:07.696290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:28.709 [2024-07-25 10:18:07.697883] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:28.709 10:18:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1472793 00:29:28.709 [2024-07-25 10:18:07.705494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:28.969 [2024-07-25 10:18:07.913856] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
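Once the listener comes up (tcp.c: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***), the next reset attempt finishes with "Resetting controller successful." The rpc_cmd calls traced above are the target bring-up this test performs: the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, the cnode1 subsystem, its namespace, and the TCP listener. A minimal sketch of the same sequence issued by hand with SPDK's scripts/rpc.py (assuming a running nvmf_tgt on the default RPC socket; the arguments are copied from the trace, not invented here):

    # Sketch only -- mirrors the rpc_cmd trace above against an already running nvmf_tgt.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420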
00:29:38.968 
00:29:38.969                                                                 Latency(us)
00:29:38.969 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:38.969 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:38.969 Verification LBA range: start 0x0 length 0x4000
00:29:38.969 Nvme1n1                     :      15.00    8541.48      33.37   10071.31       0.00    6850.93    1078.61   15291.73
00:29:38.969 ===================================================================================================================
00:29:38.969 Total                       :    8541.48      33.37   10071.31       0.00    6850.93    1078.61   15291.73
00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:38.969 rmmod nvme_tcp 00:29:38.969 rmmod nvme_fabrics 00:29:38.969 rmmod nvme_keyring 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1473878 ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1473878 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 1473878 ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 1473878 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1473878 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1473878' 00:29:38.969 killing process with pid 1473878 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 1473878
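As a quick sanity check on the bdevperf table above, the MiB/s column is just IOPS multiplied by the 4096-byte IO size: 8541.48 IOPS at 4096 B per IO is about 33.37 MiB/s, matching the Nvme1n1 row. Illustrative arithmetic only, not part of the test scripts:

    # Illustrative: recompute MiB/s from the reported IOPS and the 4096-byte IO size.
    awk 'BEGIN { printf "%.2f MiB/s\n", 8541.48 * 4096 / (1024 * 1024) }'
    # prints 33.37 MiB/s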
00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 1473878 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.969 10:18:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:39.913 00:29:39.913 real 0m27.690s 00:29:39.913 user 1m3.007s 00:29:39.913 sys 0m7.016s 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:39.913 ************************************ 00:29:39.913 END TEST nvmf_bdevperf 00:29:39.913 ************************************ 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:39.913 ************************************ 00:29:39.913 START TEST nvmf_target_disconnect 00:29:39.913 ************************************ 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:39.913 * Looking for test storage... 
00:29:39.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.913 
10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:39.913 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:29:39.914 10:18:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.061 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.062 
10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.062 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.062 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:48.062 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.062 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.062 10:18:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:48.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.629 ms 00:29:48.062 00:29:48.062 --- 10.0.0.2 ping statistics --- 00:29:48.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.062 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:48.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:29:48.062 00:29:48.062 --- 10.0.0.1 ping statistics --- 00:29:48.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.062 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:48.062 ************************************ 00:29:48.062 START TEST nvmf_target_disconnect_tc1 00:29:48.062 ************************************ 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:48.062 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:48.063 10:18:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.063 [2024-07-25 10:18:26.200668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.063 [2024-07-25 10:18:26.200731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11f2e20 with addr=10.0.0.2, port=4420 00:29:48.063 [2024-07-25 10:18:26.200761] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:48.063 [2024-07-25 10:18:26.200770] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:48.063 [2024-07-25 10:18:26.200777] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:48.063 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:48.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:48.063 Initializing NVMe Controllers 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:48.063 00:29:48.063 real 0m0.114s 00:29:48.063 user 0m0.045s 00:29:48.063 sys 0m0.070s 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:48.063 ************************************ 00:29:48.063 END TEST nvmf_target_disconnect_tc1 00:29:48.063 ************************************ 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:48.063 10:18:26 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:48.063 ************************************ 00:29:48.063 START TEST nvmf_target_disconnect_tc2 00:29:48.063 ************************************ 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1480370 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1480370 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1480370 ']' 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:48.063 10:18:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.063 [2024-07-25 10:18:26.363380] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:48.063 [2024-07-25 10:18:26.363439] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.063 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.063 [2024-07-25 10:18:26.449153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.063 [2024-07-25 10:18:26.543721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
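(Aside) The trace above shows disconnect_init launching the NVMe-oF target inside the cvl_0_0_ns_spdk namespace and then blocking until its RPC socket answers. A minimal standalone sketch of that start-and-wait pattern, assuming an SPDK checkout at $rootdir and the default /var/tmp/spdk.sock socket; the polling loop is an illustrative stand-in for the harness helpers nvmfappstart/waitforlisten, not their actual implementation:

# start nvmf_tgt in the target namespace, remembering its pid (shm id, trace mask and core mask mirror the log)
rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk        # assumption: SPDK checkout location
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
nvmfpid=$!
# poll the RPC socket until the app is ready (rough stand-in for waitforlisten)
until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before its RPC socket came up" >&2; exit 1; }
    sleep 0.5
done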
00:29:48.063 [2024-07-25 10:18:26.543770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.063 [2024-07-25 10:18:26.543778] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.063 [2024-07-25 10:18:26.543785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.063 [2024-07-25 10:18:26.543791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.063 [2024-07-25 10:18:26.544458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:29:48.063 [2024-07-25 10:18:26.544680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:29:48.063 [2024-07-25 10:18:26.544889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:29:48.063 [2024-07-25 10:18:26.544890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.063 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:48.063 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:48.063 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:48.063 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.063 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 Malloc0 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 [2024-07-25 10:18:27.228852] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
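(Aside) The rpc_cmd calls traced here, together with the namespace and listener additions that follow on the next trace lines, amount to the whole target-side configuration for this test. As a hedged recap using scripts/rpc.py directly rather than the harness's rpc_cmd wrapper, and omitting any extra transport options the harness appends, the equivalent sequence is roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumption: rpc.py from the same checkout
$rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB bdev, 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$rpc nvmf_create_transport -t tcp                                      # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # discovery listener, mirroring the trace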
00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.324 [2024-07-25 10:18:27.257192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.324 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1480716 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:48.325 10:18:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:48.325 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.238 10:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1480370 00:29:50.238 10:18:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting 
I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Write completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.238 Read completed with error (sct=0, sc=8) 00:29:50.238 starting I/O failed 00:29:50.239 Read completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 Write completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 Read completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 Write completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 Write completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 Write completed with error (sct=0, sc=8) 00:29:50.239 starting I/O failed 00:29:50.239 [2024-07-25 10:18:29.283895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.239 [2024-07-25 10:18:29.284392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.284409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.284885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.284893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 
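(Aside) Everything from the "completed with error" burst onward is the expected effect of the kill -9 of the target (pid 1480370) in the trace above: outstanding I/O on qpair 2 completes with a CQ transport error, and the reconnect example then keeps retrying the probe, with each attempt failing in connect() with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 any more. A quick, hedged way to confirm that state by hand from the initiator side, using plain bash /dev/tcp redirection rather than any SPDK tooling:

# with the target killed, a raw TCP connect to the listener address should be refused
if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
    echo "unexpected: something is still accepting on 10.0.0.2:4420"
else
    echo "connect refused (or timed out), matching the errno 111 failures in the log"
fi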
00:29:50.239 [2024-07-25 10:18:29.285360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.285367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.285801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.285808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.286026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.286040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.286428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.286436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.286910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.286917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.287445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.287463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.287915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.287922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.288391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.288419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.288894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.288903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.289484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.289512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 
00:29:50.239 [2024-07-25 10:18:29.289954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.289963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.290431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.290459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.290900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.290909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.291408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.291436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.291877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.291886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.292359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.292367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.292837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.292845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.293181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.293189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.293644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.293652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.294122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.294129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 
00:29:50.239 [2024-07-25 10:18:29.294579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.294606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.295083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.295092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.295528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.295536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.296013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.296020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.296559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.296587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.296900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.296910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.297484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.297512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.297984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.297993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.298558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.298584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.299051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.299060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 
00:29:50.239 [2024-07-25 10:18:29.299520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.299552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.239 [2024-07-25 10:18:29.300023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.239 [2024-07-25 10:18:29.300032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.239 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.300506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.300534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.300885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.300894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.301413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.301440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.301902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.301912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.302506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.302533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.303003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.303011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.303572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.303599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.303943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.303951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 
00:29:50.240 [2024-07-25 10:18:29.304511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.304538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.305008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.305016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.305568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.305595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.305945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.305953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.306467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.306494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.306841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.306850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.307231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.307238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.307573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.307579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.307957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.307964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.308384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.308390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 
00:29:50.240 [2024-07-25 10:18:29.308858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.308865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.309237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.309244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.309686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.309693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.310126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.310133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.310611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.310618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.311044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.311051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.311579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.311607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.312016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.312024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.312573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.312600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 00:29:50.240 [2024-07-25 10:18:29.312949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.240 [2024-07-25 10:18:29.312957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.240 qpair failed and we were unable to recover it. 
00:29:50.240 [2024-07-25 10:18:29.313532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.240 [2024-07-25 10:18:29.313559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:50.240 qpair failed and we were unable to recover it.
[The same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 10:18:29.313 and 10:18:29.413, differing only in the timestamps.]
00:29:50.517 [2024-07-25 10:18:29.413017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.517 [2024-07-25 10:18:29.413025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:50.517 qpair failed and we were unable to recover it.
00:29:50.517 [2024-07-25 10:18:29.413474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.413504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.413841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.413850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.414404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.414431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.414773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.414782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.415236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.415243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.415740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.415746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.416170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.416178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.416631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.416639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.416996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.417003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.417548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.417576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 
00:29:50.517 [2024-07-25 10:18:29.417883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.417893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.418399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.418407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.418849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.418856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.419299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.419306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.419733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.419739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.420166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.420173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.420630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.420637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.421063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.421069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.421582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.421610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.422047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.422055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 
00:29:50.517 [2024-07-25 10:18:29.422572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.422599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.423050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.423060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.423598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.423624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.424141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.424149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.424674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.424701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.425140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.425149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.425675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.425702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.426143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.426151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.426648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.426675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.427154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.427164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 
00:29:50.517 [2024-07-25 10:18:29.427704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.427731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.428164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.428173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.428730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.428757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.429193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.429216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.517 qpair failed and we were unable to recover it. 00:29:50.517 [2024-07-25 10:18:29.429749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.517 [2024-07-25 10:18:29.429776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.430361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.430388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.430829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.430838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.431381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.431407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.431886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.431894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.432215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.432222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 
00:29:50.518 [2024-07-25 10:18:29.432529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.432540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.432961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.432967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.433397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.433404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.433828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.433835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.434298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.434305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.434766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.434773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.435195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.435206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.435652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.436126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.436134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.436387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.436394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 
00:29:50.518 [2024-07-25 10:18:29.436838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.436845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.437301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.437308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.437517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.437531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.437998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.438005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.438302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.438317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.438651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.438657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.439081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.439087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.439597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.439604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.440027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.440034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.440459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.440466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 
00:29:50.518 [2024-07-25 10:18:29.440928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.440934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.441450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.441478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.441946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.441955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.442498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.442525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.442965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.442973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.443495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.443522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.443957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.443965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.444523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.444551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.444985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.444993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 00:29:50.518 [2024-07-25 10:18:29.445545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.518 [2024-07-25 10:18:29.445572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.518 qpair failed and we were unable to recover it. 
00:29:50.519 [2024-07-25 10:18:29.446107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.446115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.446559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.446567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.446986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.446993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.447512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.447539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.448005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.448013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.448576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.448603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.449050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.449059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.449607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.449634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.450074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.450082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.450605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.450632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 
00:29:50.519 [2024-07-25 10:18:29.451099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.451110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.451684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.451711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.452038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.452047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.452610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.452636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.453096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.453105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.453684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.453711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.454152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.454160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.454681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.454709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.455153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.455161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.455685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.455712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 
00:29:50.519 [2024-07-25 10:18:29.456152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.456160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.456587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.456594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.457018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.457026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.457567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.457594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.458117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.458126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.458643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.458670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.459113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.519 [2024-07-25 10:18:29.459122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.519 qpair failed and we were unable to recover it. 00:29:50.519 [2024-07-25 10:18:29.459563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.459571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.460002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.460008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.460525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.460552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 
00:29:50.520 [2024-07-25 10:18:29.460991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.460999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.461594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.461622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.462063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.462071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.462474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.462500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.462995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.463004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.463546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.463573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.464021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.464030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.464583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.464610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.464961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.464969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.465404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.465431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 
00:29:50.520 [2024-07-25 10:18:29.465902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.465911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.466362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.466390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.466842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.466850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.467186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.467192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.467629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.467636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.468064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.468071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.468470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.468498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.468940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.468950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.469489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.469516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.469954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.469962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 
00:29:50.520 [2024-07-25 10:18:29.470492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.470522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.470864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.470872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.471427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.471454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.471896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.471904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.472327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.472335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.472782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.472789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.473221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.473227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.473663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.473669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.474136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.474143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.474450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.474458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 
00:29:50.520 [2024-07-25 10:18:29.474934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.474941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.475355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.475362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.520 [2024-07-25 10:18:29.475786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.520 [2024-07-25 10:18:29.475793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.520 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.476256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.476263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.476728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.476735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.477193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.477203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.477624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.477631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.478092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.478099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.478558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.478566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.479005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.479012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 
00:29:50.521 [2024-07-25 10:18:29.479541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.479568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.480007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.480015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.480530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.480558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.481028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.481037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.481573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.481601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.482042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.482051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.482573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.482600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.483042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.483051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.483498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.483525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.483958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.483967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 
00:29:50.521 [2024-07-25 10:18:29.484504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.484531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.484968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.484977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.485586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.485612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.486064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.486072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.486653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.486681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.487146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.487154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.487670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.487698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.488042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.488050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.488608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.488635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.489163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.489171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 
00:29:50.521 [2024-07-25 10:18:29.489727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.489757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.490192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.490207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.490716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.490743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.491180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.491188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.491746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.491773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.492209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.492219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.492764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.521 [2024-07-25 10:18:29.492791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.521 qpair failed and we were unable to recover it. 00:29:50.521 [2024-07-25 10:18:29.493263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.493272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.493719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.493726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.493805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.493816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 
00:29:50.522 [2024-07-25 10:18:29.494275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.494282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.494723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.494729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.495196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.495206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.495724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.495730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.496174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.496181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.496606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.496613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.497041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.497047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.497267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.497282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.497719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.497726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.498181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.498189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 
00:29:50.522 [2024-07-25 10:18:29.498660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.498667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.499114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.499121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.499542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.499569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.500019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.500028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.500553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.500580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.501019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.501028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.501554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.501581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.502017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.502026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.502546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.502573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.503004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.503012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 
00:29:50.522 [2024-07-25 10:18:29.503522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.503549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.503985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.503993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.504520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.504547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.504988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.504997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.505537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.505564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.506076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.506084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.506524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.506551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.506993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.507002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.507513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.507540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.507973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.507982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 
00:29:50.522 [2024-07-25 10:18:29.508521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.508552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.509017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.509026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.509553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.522 [2024-07-25 10:18:29.509580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.522 qpair failed and we were unable to recover it. 00:29:50.522 [2024-07-25 10:18:29.510026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.510034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.510591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.510618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.511066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.511074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.511631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.511658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.512101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.512109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.512688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.512715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.513157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.513165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 
00:29:50.523 [2024-07-25 10:18:29.513734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.513762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.514212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.514221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.514842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.514869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.515428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.515455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.515899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.515909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.516444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.516471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.516940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.516948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.517159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.517169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.517614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.517622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.518047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.518054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 
00:29:50.523 [2024-07-25 10:18:29.518668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.518696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.519050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.519058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.519633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.519660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.520102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.520111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.520647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.520674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.521150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.521159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.521758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.521785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.522368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.522398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.522835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.522843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.523168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.523174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 
00:29:50.523 [2024-07-25 10:18:29.523637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.523645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.524122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.524128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.524645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.524672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.525055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.525064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.525631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.525658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.526094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.526103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.526550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.526577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.527046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.527055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.527609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.527635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 00:29:50.523 [2024-07-25 10:18:29.528133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.523 [2024-07-25 10:18:29.528142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.523 qpair failed and we were unable to recover it. 
00:29:50.523 [2024-07-25 10:18:29.528698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.528725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.529162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.529171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.529697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.529724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.530162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.530170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.530700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.530727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.531155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.531164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.531762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.531789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.532348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.532375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.532819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.532828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.533365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.533392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 
00:29:50.524 [2024-07-25 10:18:29.533831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.533840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.534275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.534282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.534747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.534754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.535232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.535239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.535687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.535693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.536126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.536132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.536613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.536620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.537003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.537010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.537496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.537503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.537946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.537952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 
00:29:50.524 [2024-07-25 10:18:29.538504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.538530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.539019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.539028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.539621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.539650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.540095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.540103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.540597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.540605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.541050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.541056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.541501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.541528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.542044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.542056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.542522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.542555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 00:29:50.524 [2024-07-25 10:18:29.542924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.524 [2024-07-25 10:18:29.542933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.524 qpair failed and we were unable to recover it. 
00:29:50.524 [2024-07-25 10:18:29.543509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.543536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.544060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.544068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.544588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.544615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.545082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.545091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.545694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.545721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.546114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.546123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.546572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.546580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.547026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.547033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.547589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.547615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.548055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.548063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 
00:29:50.525 [2024-07-25 10:18:29.548583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.548610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.549059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.549067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.549489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.549516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.549957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.549965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.550490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.550517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.550847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.550856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.551415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.551443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.551658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.551668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.552105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.552112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.552567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.552574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 
00:29:50.525 [2024-07-25 10:18:29.553006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.553012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.553445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.553452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.553782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.553789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.554232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.554239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.554688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.554694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.555125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.555132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.555662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.555669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.556088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.556095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.556432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.556439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 00:29:50.525 [2024-07-25 10:18:29.556872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.525 [2024-07-25 10:18:29.556880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.525 qpair failed and we were unable to recover it. 
00:29:50.525 [2024-07-25 10:18:29.557336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.525 [2024-07-25 10:18:29.557343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:50.525 qpair failed and we were unable to recover it.
00:29:50.525 [2024-07-25 10:18:29.557792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.525 [2024-07-25 10:18:29.557799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:50.525 qpair failed and we were unable to recover it.
[... the identical three-line failure sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection retry logged between 10:18:29.558151 and 10:18:29.658181 ...]
00:29:50.796 [2024-07-25 10:18:29.658508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.796 [2024-07-25 10:18:29.658534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:50.796 qpair failed and we were unable to recover it.
00:29:50.796 [2024-07-25 10:18:29.658987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.658995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.659552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.659578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.660052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.660060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.660595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.660622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.661076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.661084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.661645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.661671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.662143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.662151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.662599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.662606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.663046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.663053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.663589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.663614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 
00:29:50.796 [2024-07-25 10:18:29.664086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.664094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.664663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.664689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.664902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.664913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.796 qpair failed and we were unable to recover it. 00:29:50.796 [2024-07-25 10:18:29.665357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.796 [2024-07-25 10:18:29.665365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.665837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.665843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.666287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.666294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.666766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.666772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.667239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.667247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.667695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.667702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.668172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.668179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 
00:29:50.797 [2024-07-25 10:18:29.668626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.668633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.669099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.669105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.669550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.669557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.669997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.670003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.670535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.670561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.671035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.671043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.671610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.671636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.672090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.672098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.672564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.672571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.673044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.673050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 
00:29:50.797 [2024-07-25 10:18:29.673590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.673617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.674052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.674060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.674621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.674647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.675149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.675157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.675685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.675711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.676163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.676172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.676704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.676730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.677174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.677187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.677736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.677762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.678074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.678083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 
00:29:50.797 [2024-07-25 10:18:29.678533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.678541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.678983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.678990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.679530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.679557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.680008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.680016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.680597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.680623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.681126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.681134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.681660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.681686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.682139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.682148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.682373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.682386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.682859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.682866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 
00:29:50.797 [2024-07-25 10:18:29.683076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.797 [2024-07-25 10:18:29.683085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.797 qpair failed and we were unable to recover it. 00:29:50.797 [2024-07-25 10:18:29.683513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.683521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.683730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.683738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.684251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.684259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.684692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.684699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.684803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.684812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.685250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.685258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.685434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.685442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.685888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.685895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.686214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.686221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 
00:29:50.798 [2024-07-25 10:18:29.686633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.686640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.687102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.687108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.687594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.687601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.688061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.688068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.688426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.688433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.688858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.688865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.689332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.689339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.689741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.689748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.690187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.690194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.690647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.690654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 
00:29:50.798 [2024-07-25 10:18:29.691093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.691100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.691537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.691544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.691848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.691854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.692281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.692289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.692740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.692747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.693180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.693186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.693629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.905069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.905558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.905575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.906050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.906058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.906600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.906629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 
00:29:50.798 [2024-07-25 10:18:29.906975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.906987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.907514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.907545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.907996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.908007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.908528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.908559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.909019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.909030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.909608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.909639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.910116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.910126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.910565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.910596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.798 qpair failed and we were unable to recover it. 00:29:50.798 [2024-07-25 10:18:29.910948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.798 [2024-07-25 10:18:29.910958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.911509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.911547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 
00:29:50.799 [2024-07-25 10:18:29.912020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.912030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.912582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.912613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.913067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.913077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.913656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.913686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.914152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.914162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.914710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.914742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.915205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.915215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.915751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.915783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.916374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.916405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.916865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.916874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 
00:29:50.799 [2024-07-25 10:18:29.917417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.917448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.917882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.917892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.918469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.918500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.918958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.918968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.919500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.919531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.919995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.920005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.920557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.920590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.921026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.921036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.921589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.921620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.922072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.922082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 
00:29:50.799 [2024-07-25 10:18:29.922584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.922615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.923069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.923079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.923605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.923636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.924107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.924118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.924662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.924693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.925047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.925056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:50.799 [2024-07-25 10:18:29.925606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.799 [2024-07-25 10:18:29.925638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:50.799 qpair failed and we were unable to recover it. 00:29:51.073 [2024-07-25 10:18:29.926102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-07-25 10:18:29.926118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-07-25 10:18:29.926651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-07-25 10:18:29.926682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.927139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.927149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 
00:29:51.074 [2024-07-25 10:18:29.927596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.927606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.928076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.928085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.928539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.928547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.928993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.929002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.929541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.929572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.930045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.930055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.930467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.930497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.930953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.930964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.931462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.931493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-07-25 10:18:29.931962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-07-25 10:18:29.931973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 
00:29:51.074 [2024-07-25 10:18:29.932401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.074 [2024-07-25 10:18:29.932432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.074 qpair failed and we were unable to recover it.
00:29:51.074 [2024-07-25 10:18:29.932889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.074 [2024-07-25 10:18:29.932900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.074 qpair failed and we were unable to recover it.
[... the same three error lines (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeat for every reconnect attempt between 10:18:29.933 and 10:18:30.032 ...]
00:29:51.080 [2024-07-25 10:18:30.032556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.080 [2024-07-25 10:18:30.032565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.080 qpair failed and we were unable to recover it.
00:29:51.080 [2024-07-25 10:18:30.033024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.033033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.033567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.033597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.034054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.034064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.034617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.034648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.035145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.035160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.035639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.035669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.036126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.036137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.036684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.036715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.037172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.037183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.037730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.037761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 
00:29:51.080 [2024-07-25 10:18:30.038415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.038445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.038873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.038883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.039454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.039485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.039958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.039969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.040507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.040538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.041007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.041017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.041419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.041449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.041921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.041932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.042160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.042172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.042506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.042516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 
00:29:51.080 [2024-07-25 10:18:30.042972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.042980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.043559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.043590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.043850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.043861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.044315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.044324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.044550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.044564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.044781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.044791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.045256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.045266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.045635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.045644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.046094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.046102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.046554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.046563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 
00:29:51.080 [2024-07-25 10:18:30.046779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.046790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.047247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.047256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.047707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.047715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.048190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.080 [2024-07-25 10:18:30.048199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.080 qpair failed and we were unable to recover it. 00:29:51.080 [2024-07-25 10:18:30.048644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.048652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.049098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.049107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.049330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.049338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.049540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.049550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.049740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.049749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.050194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.050208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-07-25 10:18:30.050509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.050518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.050995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.051003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.051460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.051468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.051919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.051927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.052207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.052219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.052660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.052668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.053118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.053126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.053575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.053584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.054105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.054114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.054512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.054542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-07-25 10:18:30.055008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.055018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.055575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.055607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.056062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.056072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.056545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.056576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.056889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.056899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.057452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.057482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.057929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.057939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.058386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.058417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.058882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.058892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.059341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.059350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-07-25 10:18:30.059799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.059808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.060158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.060166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.060616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.060625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.061102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.061111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.061564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.061573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.062042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.062050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.062572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.062602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.063072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.063082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.063480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.063510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.063859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.063870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-07-25 10:18:30.064112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.064121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-07-25 10:18:30.064973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-07-25 10:18:30.064989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.065548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.065579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.065958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.065969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.066653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.066684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.067112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.067126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.067607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.067622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.067876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.067899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.068229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.068258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.068894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.068919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-07-25 10:18:30.069176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.069184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.069326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.069334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.069860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.069868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.070318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.070327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.070830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.070841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.071291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.071300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.071702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.071710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.072160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.072168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.072526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.073000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.073008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-07-25 10:18:30.073457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.073488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.073946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.073956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.074405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.074415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.074863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.074871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.075442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.075472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.075944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.075954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.076500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.076531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.076929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.076938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.077443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.077473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.077944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.077954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-07-25 10:18:30.078501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.078531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.078864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.078875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.079357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.079366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.079688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.079697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.080148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.080157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.080622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.080631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.081108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.081117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.081556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.081564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.082003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.082011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-07-25 10:18:30.082539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-07-25 10:18:30.082569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-07-25 10:18:30.082901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.082911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.083370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.083380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.083866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.083875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.084441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.084471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.084904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.084914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.085362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.085371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.085827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.085835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.086285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.086294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.086730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.086739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.087078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.087086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-07-25 10:18:30.087555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.087564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.087775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.087789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.088118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.088126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.088482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.088490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.088693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.088707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.089036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.089044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.089480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.089488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.089845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.089853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.090185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.090193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.090637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.090646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-07-25 10:18:30.090992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.091001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.091540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.091570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.092039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.092049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.092696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.092727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.093207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.093217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.093746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.093775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.094210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.094221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.094635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.094665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.094889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.094902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.095439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.095468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-07-25 10:18:30.095934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.095945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.096586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.096616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-07-25 10:18:30.097049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-07-25 10:18:30.097059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.097599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.097630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.097971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.097981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.098533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.098566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.099030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.099040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.099516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.099545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.099919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.099930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.100471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.100500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-07-25 10:18:30.100969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.100979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.101524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.101555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.102034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.102044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.102591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.103321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.103339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.103805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.103815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.104192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.104206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.104670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.104680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.105110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.105117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.105556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.105563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-07-25 10:18:30.105983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.105990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.106476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.106504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.106942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.106951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.107467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.107496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.107938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.107951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.108490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.108520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.108972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.108981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.109523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.109552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.109995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.110003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.110558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.110586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-07-25 10:18:30.111030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.111039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.111618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.111647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.112085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.112094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.112588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.112596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.113024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.113031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.113468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.113497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.113950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.113959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.114500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.114530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.114997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.115007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-07-25 10:18:30.115539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-07-25 10:18:30.115567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-07-25 10:18:30.116022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.116031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.116491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.116520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.116965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.116974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.117547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.117576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.118045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.118055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.118599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.118628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.119124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.119135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.119575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.119604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.120057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.120067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.120612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.120642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-07-25 10:18:30.120998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.121007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.121550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.121579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.122044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.122054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.122600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.122629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.123014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.123024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.123562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.123590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.124048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.124057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.124604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.124633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.125122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.125132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.125582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.125611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-07-25 10:18:30.126121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.126131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.126544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.126552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.126999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.127007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.127546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.127575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.128032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.128044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.128499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.128528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.129023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.129033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.129578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.129606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.130062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.130071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.130612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.130640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-07-25 10:18:30.131079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.131088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.131614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.131642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.131997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.132007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.132551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.132579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.132919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.132928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.133479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.133506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.133945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.133953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.134515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-07-25 10:18:30.134544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-07-25 10:18:30.134994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.135003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.135534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.135562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 
00:29:51.086 [2024-07-25 10:18:30.136015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.136023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.136606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.136633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.137082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.137090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.137533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.137541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.137987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.137994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.138521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.138548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.139049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.139058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.139588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.139616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.139944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.139952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.140556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.140583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 
00:29:51.086 [2024-07-25 10:18:30.141051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.141059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.141595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.141623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.142066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.142075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.142619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.142646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.143103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.143111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.143727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.143755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.144209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.144219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.144730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.144757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.145186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.145195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.145730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.145757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 
00:29:51.086 [2024-07-25 10:18:30.146197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.146214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.146804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.146832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.147275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.147284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.147642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.147650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.147851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.147866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.148302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.148310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.148517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.148528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.148964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.148972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.149398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.149405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.149849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.149855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 
00:29:51.086 [2024-07-25 10:18:30.150278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.150285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.150706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.150714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.150959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.150966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.151406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.151413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.151864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.151870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.152303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-07-25 10:18:30.152310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-07-25 10:18:30.152799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.152806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.153125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.153133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.153621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.153628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.153811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.153820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 
00:29:51.087 [2024-07-25 10:18:30.154292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.154300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.154798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.154805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.155248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.155256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.155794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.155802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.156231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.156238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.156727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.156733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.157244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.157251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.157764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.157771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.158206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.158214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.158546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.158553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 
00:29:51.087 [2024-07-25 10:18:30.159016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.159022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.159570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.159598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.160035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.160044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.160608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.160635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.161137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.161146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.161650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.161658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.162110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.162117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.162670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.162698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.163212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.163221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.163659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.163666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 
00:29:51.087 [2024-07-25 10:18:30.163991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.163999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.164546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.164574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.165095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.165104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.165534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.165562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.166013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.166025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.166614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.166641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.167141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.167151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.167683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.167710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.087 qpair failed and we were unable to recover it. 00:29:51.087 [2024-07-25 10:18:30.168081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.087 [2024-07-25 10:18:30.168090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.168559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.168566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 
00:29:51.088 [2024-07-25 10:18:30.168992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.168999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.169569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.169597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.170049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.170058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.170695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.170722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.171424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.171452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.171967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.171976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.172576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.172604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.172964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.172972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.173535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.173562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.174047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.174056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 
00:29:51.088 [2024-07-25 10:18:30.174433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.174460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.174943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.174951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.175377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.175406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.175869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.175877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.176430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.176458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.176908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.176917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.177393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.177400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.177849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.177855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.178413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.178443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.178799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.178808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 
00:29:51.088 [2024-07-25 10:18:30.179258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.179266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.179840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.179847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.180193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.180205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.180564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.180570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.181013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.181019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.181587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.181615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.182087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.182095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.182608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.182616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.182823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.182830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.183274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.183281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 
00:29:51.088 [2024-07-25 10:18:30.183780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.183786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.184234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.184242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.184594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.184601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.185038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.185045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.185480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.185490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.185918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.185924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.088 qpair failed and we were unable to recover it. 00:29:51.088 [2024-07-25 10:18:30.186205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.088 [2024-07-25 10:18:30.186212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.186554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.186561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.187001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.187007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.187441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.187468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 
00:29:51.089 [2024-07-25 10:18:30.187927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.187936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.188426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.188454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.188938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.188947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.189451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.189478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.189881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.189889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.190349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.190358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.190672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.190679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.191221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.191229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.191779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.191785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.192221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.192229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 
00:29:51.089 [2024-07-25 10:18:30.192598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.192604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.193070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.193076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.193420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.193427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.089 [2024-07-25 10:18:30.193906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.089 [2024-07-25 10:18:30.193913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.089 qpair failed and we were unable to recover it. 00:29:51.363 [2024-07-25 10:18:30.194332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.363 [2024-07-25 10:18:30.194340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.363 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.194783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.194789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.195209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.195217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.195522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.195529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.195876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.195883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.196349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.196357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 
00:29:51.364 [2024-07-25 10:18:30.196822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.196829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.197228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.197236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.198056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.198074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.198500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.198508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.198959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.198965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.199396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.199404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.199832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.199838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.199920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.199930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.200273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.200281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.200715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.200723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 
00:29:51.364 [2024-07-25 10:18:30.201146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.201153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.201631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.201638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.202076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.202083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.202585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.202592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.203038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.203048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.203507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.203535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.204028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.204037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.204644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.204671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.204889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.204900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.205261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.205269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 
00:29:51.364 [2024-07-25 10:18:30.205771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.205778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.206124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.206131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.206498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.206505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.206845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.206852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.207289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.207296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.207744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.207751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.208115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.208121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.208563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.208570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.209018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.209024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.209521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.209528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 
00:29:51.364 [2024-07-25 10:18:30.209958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.209964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.364 [2024-07-25 10:18:30.210395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.364 [2024-07-25 10:18:30.210403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.364 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.210705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.210712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.211139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.211146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.211593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.211599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.211935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.211942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.212501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.212528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.212886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.212894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.213248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.213255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.213802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.213809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 
00:29:51.365 [2024-07-25 10:18:30.214249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.214256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.214604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.214611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.215138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.215145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.215570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.215578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.216013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.216020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.216485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.216492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.216890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.216897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.217450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.217477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.217949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.217957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.218479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.218506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 
00:29:51.365 [2024-07-25 10:18:30.218915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.218925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.219477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.219504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.219966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.219975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.220519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.220546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.220953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.220961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.221486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.221513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.221988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.221996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.222588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.222616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.223092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.223100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.223515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.223522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 
00:29:51.365 [2024-07-25 10:18:30.223980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.223988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.224547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.224574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.225015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.225024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.225556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.225584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.226061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.226069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.226598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.226626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.227064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.227073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.227521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.365 [2024-07-25 10:18:30.227547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.365 qpair failed and we were unable to recover it. 00:29:51.365 [2024-07-25 10:18:30.227989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.227997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.228560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.228588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 
00:29:51.366 [2024-07-25 10:18:30.229052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.229062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.229632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.229659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.230127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.230136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.230721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.230749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.231082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.231090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.231533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.231542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.231982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.231988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.232517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.232544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.232984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.232993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.233497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.233523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 
00:29:51.366 [2024-07-25 10:18:30.233991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.234000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.234469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.234500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.234943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.234952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.235482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.235509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.235984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.235993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.236546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.236573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.237033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.237041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.237580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.237607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.237981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.237989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.238572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.238599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 
00:29:51.366 [2024-07-25 10:18:30.239040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.239049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.239495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.239522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.239999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.240008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.240547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.240575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.240968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.240978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.241544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.241572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.241914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.241924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.242519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.242546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.366 [2024-07-25 10:18:30.242990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.366 [2024-07-25 10:18:30.242998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.366 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.243553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.243580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-07-25 10:18:30.244049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.244058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.244659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.244686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.245211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.245221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.245739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.245766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.246423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.246450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.246891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.246900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.247425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.247452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.247903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.247911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.248442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.248468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.248938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.248946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-07-25 10:18:30.249156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.249167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.249622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.249630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.249842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.249852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.250313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.250321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.250781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.250788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.251222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.251230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.251655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.251662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.252130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.252137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.252630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.252637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.253067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.253074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-07-25 10:18:30.253609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.253637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.253951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.253964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.254317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.254325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.254780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.254787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.255133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.255140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.255592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.255599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.256048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.256054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.256622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.256649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.257182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.257191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-07-25 10:18:30.257755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-07-25 10:18:30.257782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
[2024-07-25 10:18:30.258413 - 10:18:30.358126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 and nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 repeat identically for every further connect attempt in this interval; each attempt ends with: qpair failed and we were unable to recover it.
00:29:51.374 [2024-07-25 10:18:30.358724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.358751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.359414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.359441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.359908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.359916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.360500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.360527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.360874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.360882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.361353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.361361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.361827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.361833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.362138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.362146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.362398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.362405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.362732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.362745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 
00:29:51.374 [2024-07-25 10:18:30.363196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.363208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.363664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.363670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.363881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-07-25 10:18:30.363893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-07-25 10:18:30.364306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.364313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.364759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.364766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.365190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.365197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.365639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.365645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.366070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.366076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.366722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.366750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.366972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.366984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-07-25 10:18:30.367551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.367578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.368032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.368041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.368623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.368650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.369119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.369128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.369473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.369500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.369997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.370005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.370442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.370469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.370936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.370945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.371463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.371491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.371951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.371960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-07-25 10:18:30.372495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.372522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.372925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.372937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.373486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.373514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.373962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.373971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.374580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.374607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.374825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.374837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.375299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.375307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.375740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.375747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.376179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.376187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.376413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.376423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-07-25 10:18:30.376873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.376880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.377380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.377388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.377855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.377862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.378297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.378304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.378743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.378749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.379251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.379258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.379673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.379680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.380140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.380147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-07-25 10:18:30.380634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-07-25 10:18:30.380641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.381083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.381089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-07-25 10:18:30.381586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.381594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.382019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.382026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.382541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.382568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.383027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.383036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.383569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.383595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.384064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.384073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.384550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.384577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.385019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.385027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.385609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.385636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.386066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.386075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-07-25 10:18:30.386658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.386686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.387128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.387136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.387693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.387720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.388157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.388166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.388729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.388756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.389194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.389209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.389727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.389753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.390415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.390442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.390787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.390795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.391422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.391449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-07-25 10:18:30.391918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.391927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.392472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.392504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.392936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.392945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.393491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.393518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.393987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.393996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.394524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.394551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.394897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.394905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.395442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.395450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.395872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.395880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.396182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.396191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-07-25 10:18:30.396637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.396645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.397064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.397070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.397648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.397675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-07-25 10:18:30.398120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-07-25 10:18:30.398129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.398732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.398759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.399210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.399219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.399739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.399766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.400409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.400436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.400890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.400900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.401465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.401492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-07-25 10:18:30.401987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.401995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.402610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.402637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.403135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.403144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.403713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.403740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.404408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.404435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.404904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.404913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.405443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.405470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.405918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.405927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.406139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.406150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.406419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.406427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-07-25 10:18:30.406869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.406876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.407220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.407227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.407706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.407712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.408141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.408148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.408489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.408496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.408971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.408979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.409459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.409467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.409688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.409698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.410157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.410163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.410593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.410601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-07-25 10:18:30.411034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.411042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.411480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.411511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.411949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.411957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.412496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.412524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.412976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.412985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.413429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.413457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.413900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.413909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.414474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.414501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.414836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-07-25 10:18:30.414845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-07-25 10:18:30.415289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.415296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-07-25 10:18:30.415752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.415760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.416210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.416218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.416748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.416756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.417194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.417207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.417674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.417680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.418162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.418169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.418603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.418610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.419043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.419049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.419649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.419676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-07-25 10:18:30.420116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-07-25 10:18:30.420125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-07-25 10:18:30.420566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.378 [2024-07-25 10:18:30.420594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.378 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats for every reconnect attempt from 10:18:30.421 through 10:18:30.519: connect() to addr=10.0.0.2, port=4420 returns errno = 111 each time, and every qpair on tqpair=0x7faaa0000b90 fails without recovery ...]
00:29:51.653 [2024-07-25 10:18:30.519538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.653 [2024-07-25 10:18:30.519565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.653 qpair failed and we were unable to recover it.
00:29:51.653 [2024-07-25 10:18:30.520058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.520066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.520613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.520640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.521151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.521159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.521744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.521771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.522426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.522453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.522961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.522969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.523409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.523436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.523933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.523942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.524508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.524535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.653 qpair failed and we were unable to recover it. 00:29:51.653 [2024-07-25 10:18:30.524881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.653 [2024-07-25 10:18:30.524890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 
00:29:51.654 [2024-07-25 10:18:30.525473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.525500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.525989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.525998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.526515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.526542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.527060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.527068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.527604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.527631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.528154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.528162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.528714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.528741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.529430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.529457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.529803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.529812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.530299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.530306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 
00:29:51.654 [2024-07-25 10:18:30.530791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.530798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.531237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.531244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.531764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.531770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.532207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.532215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.532657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.532663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.533109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.533116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.533515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.533522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.533998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.534004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.534551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.534578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.535043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.535052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 
00:29:51.654 [2024-07-25 10:18:30.535490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.535517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.535864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.535872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.536441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.536468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.536921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.536930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.537487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.537514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.537968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.537977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.538506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.538533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.538984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.538993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.539477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.539507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.539855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.539864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 
00:29:51.654 [2024-07-25 10:18:30.540214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.540222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.540560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.540567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.541007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.541014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.541248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.541255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.541734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.541742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.542209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.542217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.542706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.654 [2024-07-25 10:18:30.542713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.654 qpair failed and we were unable to recover it. 00:29:51.654 [2024-07-25 10:18:30.543159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.543167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.543610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.543617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.544055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.544061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 
00:29:51.655 [2024-07-25 10:18:30.544625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.544652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.545134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.545142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.545675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.545683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.546102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.546109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.546479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.546506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.546986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.546994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.547464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.547492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.547970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.547979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.548458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.548485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.548977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.548986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 
00:29:51.655 [2024-07-25 10:18:30.549482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.549510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.549873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.549882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.550436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.550463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.550681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.550692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.551122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.551129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.551276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.551287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.551543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.551552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.551906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.551913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.552337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.552344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.552764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.552771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 
00:29:51.655 [2024-07-25 10:18:30.553236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.553243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.553561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.553567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.553996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.554002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.554462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.554468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.554910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.554917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.555345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.555351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.555798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.555804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.556287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.556293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.556817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.556826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.557254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.557261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 
00:29:51.655 [2024-07-25 10:18:30.557781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.557787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.558227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.558234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.558725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.558732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.559169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.559175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.655 [2024-07-25 10:18:30.559603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.655 [2024-07-25 10:18:30.559610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.655 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.559949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.559956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.560289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.560296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.560732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.560739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.561075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.561082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.561450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.561458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 
00:29:51.656 [2024-07-25 10:18:30.561915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.561921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.562459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.562486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.562937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.562946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.563495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.563522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.564027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.564036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.564625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.564652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.565109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.565118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.565443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.565451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.565879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.565886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.566112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.566124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 
00:29:51.656 [2024-07-25 10:18:30.566368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.566376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.566659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.566667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.567028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.567035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.567402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.567410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.567858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.567864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.567983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.567993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.568457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.568464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.568904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.568910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.569280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.569287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.569776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.569782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 
00:29:51.656 [2024-07-25 10:18:30.570221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.570228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.570476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.570485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.570880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.570887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.571319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.571326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.656 [2024-07-25 10:18:30.571666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.656 [2024-07-25 10:18:30.571673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.656 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.572011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.572017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.572473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.572480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.572971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.572977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.573420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.573429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.573910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.573917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 
00:29:51.657 [2024-07-25 10:18:30.574445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.574472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.574941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.574950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.575458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.575486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.575987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.575995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.576579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.576607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.577076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.577084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.577502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.577510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.577975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.577981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.578530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.578558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 00:29:51.657 [2024-07-25 10:18:30.579000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.657 [2024-07-25 10:18:30.579009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.657 qpair failed and we were unable to recover it. 
00:29:51.657 [2024-07-25 10:18:30.579465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.657 [2024-07-25 10:18:30.579491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.657 qpair failed and we were unable to recover it.
00:29:51.657 [2024-07-25 10:18:30.579950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.657 [2024-07-25 10:18:30.579959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.657 qpair failed and we were unable to recover it.
00:29:51.657 [2024-07-25 10:18:30.580504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.657 [2024-07-25 10:18:30.580531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.657 qpair failed and we were unable to recover it.
00:29:51.657 [...] the same three-line failure repeats unchanged for every remaining reconnect attempt against tqpair=0x7faaa0000b90 (addr=10.0.0.2, port=4420) from 10:18:30.580904 through 10:18:30.676046; each attempt ends with "qpair failed and we were unable to recover it."
00:29:51.663 [2024-07-25 10:18:30.676589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.676617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.676977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.676986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.677549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.677576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.678028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.678036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.678534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.678561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.678927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.678935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.679498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.679525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.679969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.679978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.680408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.680437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.680901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.680910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 
00:29:51.663 [2024-07-25 10:18:30.681474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.681502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.681949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.681958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.682498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.682525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.682987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.682996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.683599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.683626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.684133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.684141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.684632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.684639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.685068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.685075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.685665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.685697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.686057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.686066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 
00:29:51.663 [2024-07-25 10:18:30.686627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.686653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.687146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.687155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.687630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.687657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.688100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.688109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.688730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.688757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.689404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.689431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.689813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.689822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.690415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.690443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.690791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.690800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.691123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.691130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 
00:29:51.663 [2024-07-25 10:18:30.691595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.691602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.692040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.692048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.663 qpair failed and we were unable to recover it. 00:29:51.663 [2024-07-25 10:18:30.692469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.663 [2024-07-25 10:18:30.692496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.692978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.692986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.693629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.693656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.694163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.694172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.694751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.694779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.695422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.695450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.695798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.695806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.696105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.696112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 
00:29:51.664 [2024-07-25 10:18:30.696462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.696471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.696917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.696924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.697347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.697354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.697811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.697817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.698244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.698251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.698657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.698664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.699120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.699127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.699546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.699553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.699940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.699947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.700293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.700301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 
00:29:51.664 [2024-07-25 10:18:30.700742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.700749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.701211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.701219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.701560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.701567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.702012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.702018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.702211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.702219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.702587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.702594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.703023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.703029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.703543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.703550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.703980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.703986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.704434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.704461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 
00:29:51.664 [2024-07-25 10:18:30.704905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.704913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.705334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.705343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.705816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.705824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.706314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.706321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.706762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.706768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.707195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.707213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.707644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.707650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.707987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.707993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.708582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.708610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 00:29:51.664 [2024-07-25 10:18:30.709126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.664 [2024-07-25 10:18:30.709134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.664 qpair failed and we were unable to recover it. 
00:29:51.664 [2024-07-25 10:18:30.709251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.709263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.709744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.709751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.710179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.710186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.710665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.710672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.711166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.711173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.711691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.711717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.711936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.711947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.712390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.712398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.712734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.712741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.713177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.713183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 
00:29:51.665 [2024-07-25 10:18:30.713660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.713667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.714128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.714135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.714488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.714494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.714961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.714967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.715501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.715529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.716041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.716053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.716578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.716605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.717056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.717066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.717547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.717574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.718061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.718069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 
00:29:51.665 [2024-07-25 10:18:30.718652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.718680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.719114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.719123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.719731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.719758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.720406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.720433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.720866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.720875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.721403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.721430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.721929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.721940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.722482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.722509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.723012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.723021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.723536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.723564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 
00:29:51.665 [2024-07-25 10:18:30.724018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.724026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.724556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.665 [2024-07-25 10:18:30.724584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.665 qpair failed and we were unable to recover it. 00:29:51.665 [2024-07-25 10:18:30.725076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.725085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.725510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.725519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.725964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.725972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.726501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.726528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.727010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.727019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.727549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.727576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.727795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.727807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.728266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.728274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 
00:29:51.666 [2024-07-25 10:18:30.728768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.728774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.729204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.729211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.729750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.729756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.730080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.730087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.730674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.730681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.730886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.730896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.731512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.731539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.732035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.732043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.732258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.732270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.732723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.732732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 
00:29:51.666 [2024-07-25 10:18:30.733078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.733086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.733537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.733544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.733890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.733897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.734340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.734347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.734762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.734769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.735216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.735225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.735681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.735688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.736151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.736158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.736587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.736595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.737062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.737069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 
00:29:51.666 [2024-07-25 10:18:30.737608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.737635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.738078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.738087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.738468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.738475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.738896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.738902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.739346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.739353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.739794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.739800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.740136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.740143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.740504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.740510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.740938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.740945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 00:29:51.666 [2024-07-25 10:18:30.741391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.666 [2024-07-25 10:18:30.741399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.666 qpair failed and we were unable to recover it. 
00:29:51.666 [2024-07-25 10:18:30.741843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.741849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.742281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.742288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.742720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.742727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.743173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.743181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.743539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.743546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.743973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.743979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.744541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.744568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.745011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.745020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.745622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.745649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.746099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.746108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 
00:29:51.667 [2024-07-25 10:18:30.746561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.746569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.746988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.746995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.747569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.747596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.748067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.748075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.748649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.748676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.749118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.749127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.749691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.749718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.750154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.750162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.750694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.750721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.751058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.751066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 
00:29:51.667 [2024-07-25 10:18:30.751598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.751625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.752092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.752101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.752689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.752716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.753068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.753077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.753633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.753660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.754099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.754111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.754458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.754466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.754897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.754904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.755412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.755439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.755904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.755913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 
00:29:51.667 [2024-07-25 10:18:30.756480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.756507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.756948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.756957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.757478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.757505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.757946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.757954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.758474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.758501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.758714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.758724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.759180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.667 [2024-07-25 10:18:30.759188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.667 qpair failed and we were unable to recover it. 00:29:51.667 [2024-07-25 10:18:30.759629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.759637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.760062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.760069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.760609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.760636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 
00:29:51.668 [2024-07-25 10:18:30.761078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.761087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.761512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.761519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.761948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.761954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.762414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.762441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.762880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.762889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.763418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.763445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.763888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.763896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.764316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.764324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.764849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.764855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.765291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.765299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 
00:29:51.668 [2024-07-25 10:18:30.765745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.765752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.766221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.766228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.766688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.766695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.767118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.767126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.767560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.767568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.768074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.768081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.768607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.768634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.769084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.769092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.769528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.769536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.769995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.770003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 
00:29:51.668 [2024-07-25 10:18:30.770606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.770633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.771100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.771108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.771559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.771567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.771992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.771999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.772529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.772556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.773022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.773034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.773565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.773593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.774033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.774042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.774565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.774592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.668 [2024-07-25 10:18:30.775075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.775083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 
00:29:51.668 [2024-07-25 10:18:30.775550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.668 [2024-07-25 10:18:30.775558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.668 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.775984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.775993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.776574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.776603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.777047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.777056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.777470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.777497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.777957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.777967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.778535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.778562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.779092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.779101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.779434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.779441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.779791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.779798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 
00:29:51.937 [2024-07-25 10:18:30.780241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.780248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.780656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.780662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.781137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.781143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.781311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.781323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.781840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.781847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.782288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.937 [2024-07-25 10:18:30.782295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.937 qpair failed and we were unable to recover it. 00:29:51.937 [2024-07-25 10:18:30.782730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.782737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.783183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.783190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.783646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.783654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.784117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.784124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 
00:29:51.938 [2024-07-25 10:18:30.784541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.784548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.784886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.784892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.785339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.785346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.785780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.785787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.786255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.786262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.786673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.786680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.787022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.787029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.787357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.787370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.787881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.787887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.788225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.788232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 
00:29:51.938 [2024-07-25 10:18:30.788715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.788721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.788921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.788929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.789389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.789396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.789850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.789856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.790277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.790284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.790760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.790769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.791195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.791206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.791532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.791538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.791976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.791982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.792509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.792536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 
00:29:51.938 [2024-07-25 10:18:30.793014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.793023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.793557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.793583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.793948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.793957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.794503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.794530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.794999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.795008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.795491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.795518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.795866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.795875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.796180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.796188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.796550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.938 [2024-07-25 10:18:30.796557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.938 qpair failed and we were unable to recover it. 00:29:51.938 [2024-07-25 10:18:30.797046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.797053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 
00:29:51.939 [2024-07-25 10:18:30.797620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.797647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.798100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.798108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.798724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.798751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.799412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.799439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.799908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.799917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.800488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.800515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.801013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.801021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.801487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.801513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.801984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.801993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.802574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.802602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 
00:29:51.939 [2024-07-25 10:18:30.802956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.802964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.803494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.803521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.803990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.803998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.804609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.804636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.805060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.805069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.805623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.805649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.806152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.806161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.806719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.806746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.807419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.807446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.807887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.807895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 
00:29:51.939 [2024-07-25 10:18:30.808433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.808460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.808930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.808939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.809504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.809532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.809864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.809873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.810085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.810093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.810679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.810690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.811014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.811022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.811484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.811492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.811919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.811926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.812432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.812459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 
00:29:51.939 [2024-07-25 10:18:30.812897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.812905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.813512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.813539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.813982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.813991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.814602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.814629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.814850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.814862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.815210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.939 [2024-07-25 10:18:30.815219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.939 qpair failed and we were unable to recover it. 00:29:51.939 [2024-07-25 10:18:30.815756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.815764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.816054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.816062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.816607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.816634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.817072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.817080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 
00:29:51.940 [2024-07-25 10:18:30.817505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.817532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.818008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.818017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.818557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.818583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.819029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.819038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.819573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.819600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.820069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.820077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.820612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.820639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.821094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.821103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.821645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.821673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.822115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.822124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 
00:29:51.940 [2024-07-25 10:18:30.822629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.822636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.823052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.823059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.823583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.823610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.824076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.824084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.824619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.824647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.825094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.825104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.825637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.825645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.826059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.826066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.826605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.826632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.827072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.827081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 
00:29:51.940 [2024-07-25 10:18:30.827617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.827644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.828150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.828159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.828688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.828715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.829163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.829172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.829696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.829723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.830190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.830217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.830726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.830753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.831182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.831191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.831724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.831751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 00:29:51.940 [2024-07-25 10:18:30.832402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.940 [2024-07-25 10:18:30.832429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.940 qpair failed and we were unable to recover it. 
00:29:51.941 [2024-07-25 10:18:30.832900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.832909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.833428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.833455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.833897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.833906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.834504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.834531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.834996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.835004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.835545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.835572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.836019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.836028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.836647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.836673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.837115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.837124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.837709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.837736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 
00:29:51.941 [2024-07-25 10:18:30.838183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.838191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.838719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.838746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.839217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.839227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.839729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.839736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.840164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.840171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.840604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.840611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.841032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.841038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.841562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.841588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.842021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.842030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.842578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.842605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 
00:29:51.941 [2024-07-25 10:18:30.843070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.843078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.843681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.843708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.844141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.844150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.844601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.844628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.844963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.844972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.845468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.845495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.845937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.845946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.846413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.846440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.846905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.846914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.847398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.847431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 
00:29:51.941 [2024-07-25 10:18:30.847868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.847876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.848295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.848302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.848745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.848752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.849173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.849180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.849550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.849558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.850028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.850038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.850562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.941 [2024-07-25 10:18:30.850588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.941 qpair failed and we were unable to recover it. 00:29:51.941 [2024-07-25 10:18:30.851038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.851046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.851574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.851601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.852076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.852085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 
00:29:51.942 [2024-07-25 10:18:30.852520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.852528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.852996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.853003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.853529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.853556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.854024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.854033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.854597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.854625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.854970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.854979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.855526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.855553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.855994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.856002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.856507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.856534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.856974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.856982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 
00:29:51.942 [2024-07-25 10:18:30.857523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.857550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.857988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.857996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.858543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.858570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.859007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.859015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.859445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.859453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.859929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.859936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.860457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.860484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.860927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.860936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.861481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.861507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.861976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.861985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 
00:29:51.942 [2024-07-25 10:18:30.862552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.862579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.863066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.863074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.863597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.863625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.864120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.864128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.864694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.864722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.865159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.865167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.942 [2024-07-25 10:18:30.865631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.942 [2024-07-25 10:18:30.865658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.942 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.866134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.866143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.866579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.866587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.867006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.867014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 
00:29:51.943 [2024-07-25 10:18:30.867584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.867611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.867965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.867974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.868519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.868546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.868988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.868997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.869519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.869546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.869893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.869904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.870465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.870492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.870936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.870945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.871492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.871519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.871987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.871996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 
00:29:51.943 [2024-07-25 10:18:30.872507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.872534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.872882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.872890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.873368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.873375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.873813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.873820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.874312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.874319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.874765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.874772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.875204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.875211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.875680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.875687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.876154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.876161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.876561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.876587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 
00:29:51.943 [2024-07-25 10:18:30.877040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.877049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.877567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.877594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.878031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.878040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.878562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.878589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.879032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.879040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.879572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.879599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.880069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.880077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.880603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.880630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.881075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.881083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.881486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.881513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 
00:29:51.943 [2024-07-25 10:18:30.881980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.881989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.882505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.882533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.882979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.882988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.883582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.883609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.943 qpair failed and we were unable to recover it. 00:29:51.943 [2024-07-25 10:18:30.883828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.943 [2024-07-25 10:18:30.883839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.884300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.884308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.884783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.884790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.885219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.885226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.885404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.885413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.885528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.885535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 
00:29:51.944 [2024-07-25 10:18:30.885943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.885950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.886379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.886386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.886581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.886590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.887000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.887006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.887448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.887455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.887921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.887931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.888394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.888402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.888898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.888905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.889106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.889115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.889586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.889593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 
00:29:51.944 [2024-07-25 10:18:30.890015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.890021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.890489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.890496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.890916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.890922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.891496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.891523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.891842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.891851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.892321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.892329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.892762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.892768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.893206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.893213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.893683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.893690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.894159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.894166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 
00:29:51.944 [2024-07-25 10:18:30.894697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.894723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.895165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.895174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.895604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.895632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.896072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.896081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.896643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.896670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.897117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.897125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.897626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.897653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.944 [2024-07-25 10:18:30.898088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.944 [2024-07-25 10:18:30.898096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.944 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.898541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.898549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.898944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.898951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 
00:29:51.945 [2024-07-25 10:18:30.899485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.899512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.899980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.899988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.900388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.900416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.900871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.900881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.901463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.901490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.901973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.901981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.902509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.902536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.902984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.902993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.903534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.903561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 00:29:51.945 [2024-07-25 10:18:30.904061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.945 [2024-07-25 10:18:30.904070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.945 qpair failed and we were unable to recover it. 
00:29:51.950 [2024-07-25 10:18:30.994651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.950 [2024-07-25 10:18:30.994658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.950 qpair failed and we were unable to recover it. 00:29:51.950 [2024-07-25 10:18:30.995122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.950 [2024-07-25 10:18:30.995129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.950 qpair failed and we were unable to recover it. 00:29:51.950 [2024-07-25 10:18:30.995574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.950 [2024-07-25 10:18:30.995584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.950 qpair failed and we were unable to recover it. 00:29:51.950 [2024-07-25 10:18:30.996005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.950 [2024-07-25 10:18:30.996013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.950 qpair failed and we were unable to recover it. 00:29:51.950 [2024-07-25 10:18:30.996572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.950 [2024-07-25 10:18:30.996599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.950 qpair failed and we were unable to recover it. 00:29:51.950 [2024-07-25 10:18:30.997105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.997113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:30.997455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.997463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:30.997929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.997936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:30.998461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.998488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:30.998953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.998962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 
00:29:51.951 [2024-07-25 10:18:30.999504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.999531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:30.999978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:30.999987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.000585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.000612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.001064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.001073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.001603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.001630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.002075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.002084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.002418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.002426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.002836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.002843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.003277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.003285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.003725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.003732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 
00:29:51.951 [2024-07-25 10:18:31.004160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.004167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.004634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.004641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.005115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.005123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.005563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.005571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.005993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.006001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.006568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.006595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.007036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.007044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.007594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.007621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.007962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.007971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.008535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.008562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 
00:29:51.951 [2024-07-25 10:18:31.008999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.009007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.009542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.009569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.009986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.009995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.010540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.010567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.011089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.011097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.011545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.011552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.011982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.011989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.012515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.012542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.012981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.951 [2024-07-25 10:18:31.012989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.951 qpair failed and we were unable to recover it. 00:29:51.951 [2024-07-25 10:18:31.013510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.013537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 
00:29:51.952 [2024-07-25 10:18:31.013954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.013963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.014514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.014541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.014980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.014991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.015553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.015581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.016046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.016054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.016576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.016603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.017041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.017049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.017567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.017594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.018061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.018070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.018590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.018617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 
00:29:51.952 [2024-07-25 10:18:31.019056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.019065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.019600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.019627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.020091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.020100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.020552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.020580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.021033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.021042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.021600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.021627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.022101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.022110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.022549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.022556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.023007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.023014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.023557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.023584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 
00:29:51.952 [2024-07-25 10:18:31.024050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.024058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.024606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.024634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.025074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.025082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.025602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.025629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.026000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.026009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.026568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.952 [2024-07-25 10:18:31.026595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.952 qpair failed and we were unable to recover it. 00:29:51.952 [2024-07-25 10:18:31.027038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.027047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.027542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.027570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.028035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.028043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.028483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.028510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 
00:29:51.953 [2024-07-25 10:18:31.028953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.028962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.029544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.029571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.030045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.030053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.030565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.030593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.031044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.031052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.031592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.031620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.032061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.032070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.032587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.032614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.033062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.033071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.033647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.033674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 
00:29:51.953 [2024-07-25 10:18:31.033908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.033917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.034446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.034473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.034912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.034923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.035480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.035507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.035972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.035980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.036567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.036595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.037032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.037040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.037562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.037590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.038057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.038067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.038631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.038657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 
00:29:51.953 [2024-07-25 10:18:31.039101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.039109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.039606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.039633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.040136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.040144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.040528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.040536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.040759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.040770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.041219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.041227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.041680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.041687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.042115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.953 [2024-07-25 10:18:31.042122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.953 qpair failed and we were unable to recover it. 00:29:51.953 [2024-07-25 10:18:31.042570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.042578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.043020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.043027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 
00:29:51.954 [2024-07-25 10:18:31.043374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.043381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.043801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.043807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.044228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.044235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.044660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.044667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.045096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.045102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.045586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.045593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.046048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.046055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.046573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.046600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.047066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.047076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.047628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.047655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 
00:29:51.954 [2024-07-25 10:18:31.048098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.048107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.048511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.048520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.048876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.048883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.049460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.049487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.049927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.049935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.050489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.050516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.050992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.051000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.051550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.051577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.052015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.052023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.052465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.052491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 
00:29:51.954 [2024-07-25 10:18:31.052938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.052948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.053473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.053500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.053938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.053951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.054476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.054503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.055000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.055009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.055553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.055580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.056008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.056017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.056420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.056447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.056890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.056899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.057115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.057126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 
00:29:51.954 [2024-07-25 10:18:31.057285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.057295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.057861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.057868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.058364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.058371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.954 [2024-07-25 10:18:31.058803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.954 [2024-07-25 10:18:31.058810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.954 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.059230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.059237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.059723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.059731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.060218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.060226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.060641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.060647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.061075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.061081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 00:29:51.955 [2024-07-25 10:18:31.061289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.955 [2024-07-25 10:18:31.061298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:51.955 qpair failed and we were unable to recover it. 
00:29:51.955 [2024-07-25 10:18:31.061784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.955 [2024-07-25 10:18:31.061792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:51.955 qpair failed and we were unable to recover it.
00:29:52.224 [2024-07-25 10:18:31.062224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.224 [2024-07-25 10:18:31.062233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.224 qpair failed and we were unable to recover it.
[The same three-line failure repeats for every subsequent connection attempt from 10:18:31.062 through 10:18:31.158: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovering.]
00:29:52.229 [2024-07-25 10:18:31.158502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.229 [2024-07-25 10:18:31.158529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.229 qpair failed and we were unable to recover it.
00:29:52.229 [2024-07-25 10:18:31.158918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.229 [2024-07-25 10:18:31.158926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.229 qpair failed and we were unable to recover it. 00:29:52.229 [2024-07-25 10:18:31.159431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.229 [2024-07-25 10:18:31.159458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.229 qpair failed and we were unable to recover it. 00:29:52.229 [2024-07-25 10:18:31.159958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.159967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.160490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.160517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.160987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.160995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.161556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.161583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.162030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.162039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.162618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.162645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.162999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.163008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.163552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.163579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 
00:29:52.230 [2024-07-25 10:18:31.164033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.164041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.164488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.164515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.164987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.164998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.165551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.165578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.165968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.165977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.166500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.166527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.166948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.166956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.167498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.167524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.167973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.167981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.168423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.168450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 
00:29:52.230 [2024-07-25 10:18:31.168955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.168963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.169497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.169525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.169969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.169977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.170526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.170553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.171060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.171069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.171606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.171633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.172074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.172084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.172617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.172644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.173113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.173122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.173644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.173671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 
00:29:52.230 [2024-07-25 10:18:31.173964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.173978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.174541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.174568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.175079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.175088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.175553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.175561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.175988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.175994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.176553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.176579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.177084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.177093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.177531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.177539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.177967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.177974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 00:29:52.230 [2024-07-25 10:18:31.178535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.230 [2024-07-25 10:18:31.178563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.230 qpair failed and we were unable to recover it. 
00:29:52.230 [2024-07-25 10:18:31.178916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.178925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.179506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.179532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.179791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.179800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.180160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.180167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.180641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.180648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.181086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.181093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.181545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.181552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.181894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.181900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.182373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.182381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.182824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.182831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 
00:29:52.231 [2024-07-25 10:18:31.183277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.183284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.183576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.183583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.184062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.184069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.184496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.184503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.184940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.184947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.185502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.185529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.185872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.185881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.186324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.186332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.186772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.186778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.187255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.187262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 
00:29:52.231 [2024-07-25 10:18:31.187714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.187720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.188148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.188154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.188536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.188542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.188754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.188761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.189231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.189238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.189678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.189686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.190124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.190131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.190478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.190486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.190954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.190962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.191318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.191325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 
00:29:52.231 [2024-07-25 10:18:31.191772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.191778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.192175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.192181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.192497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.192505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.192944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.192951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.193286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.231 [2024-07-25 10:18:31.193293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.231 qpair failed and we were unable to recover it. 00:29:52.231 [2024-07-25 10:18:31.193599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.193605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.193770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.193776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.194235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.194242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.194688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.194694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.195126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.195135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 
00:29:52.232 [2024-07-25 10:18:31.195633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.195640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.196060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.196067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.196413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.196421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.196880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.196886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.197310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.197317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.197390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.197401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.197852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.197859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.198276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.198283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.198751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.198758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.199225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.199233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 
00:29:52.232 [2024-07-25 10:18:31.199725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.199732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.200150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.200156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.200597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.200603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.201038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.201045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.201252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.201262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.201592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.201599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.201802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.201810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.202234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.202241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.202765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.202772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.203220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.203227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 
00:29:52.232 [2024-07-25 10:18:31.203638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.203646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.204115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.204122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.204545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.204553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.205069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.205075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.205508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.205515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.205940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.205946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.206541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.206568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.207014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.207023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.207551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.207578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.208048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.208056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 
00:29:52.232 [2024-07-25 10:18:31.208587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.208614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.209112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.209120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.209649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.232 [2024-07-25 10:18:31.209676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.232 qpair failed and we were unable to recover it. 00:29:52.232 [2024-07-25 10:18:31.210114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.210123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.210442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.210450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.210792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.210799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.211263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.211270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.211743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.211750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.212168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.212175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.212625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.212635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 
00:29:52.233 [2024-07-25 10:18:31.213098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.213104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.213595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.213602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.214031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.214037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.214570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.214598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.215068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.215077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.215623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.215649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.216092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.216101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.216682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.216709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.216870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.216881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.217292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.217300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 
00:29:52.233 [2024-07-25 10:18:31.217513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.217523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.217972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.217979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.218458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.218465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.218894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.218901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.219331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.219338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.219771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.219777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.220203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.220211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.220410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.220419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.220967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.220973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.221390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.221397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 
00:29:52.233 [2024-07-25 10:18:31.221695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.221703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.222176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.222182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.222660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.222666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.223085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.223092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.233 [2024-07-25 10:18:31.223554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.233 [2024-07-25 10:18:31.223561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.233 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.223987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.223994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.224490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.224517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.224953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.224961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.225470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.225497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.225923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.225931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 
00:29:52.234 [2024-07-25 10:18:31.226471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.226498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.226936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.226944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.227511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.227538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.228006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.228017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.228551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.228578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.229068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.229077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.229600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.229627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.230059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.230067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.230552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.230580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.231027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.231038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 
00:29:52.234 [2024-07-25 10:18:31.231485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.231512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.231987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.231996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.232533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.232561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.232997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.233006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.233498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.233525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.233965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.233974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.234422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.234458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.234895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.234904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.235466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.235494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.235958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.235967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 
00:29:52.234 [2024-07-25 10:18:31.236485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.236512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.236955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.236963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.237482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.237509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.237954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.237964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.238502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.238529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.238967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.238975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.239503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.239530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.239996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.240005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.240448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.240475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.240914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.240922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 
00:29:52.234 [2024-07-25 10:18:31.241446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.234 [2024-07-25 10:18:31.241472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.234 qpair failed and we were unable to recover it. 00:29:52.234 [2024-07-25 10:18:31.241946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.241955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.242401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.242427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.242868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.242876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.243111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.243117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.243602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.243609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.243965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.243972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.244423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.244429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.244779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.244785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.245268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.245275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 
00:29:52.235 [2024-07-25 10:18:31.245701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.245708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.246041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.246048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.246503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.246510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.246929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.246935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.247473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.247500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.247717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.247728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.248186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.248193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.248537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.248544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.248730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.248740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.249152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.249162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 
00:29:52.235 [2024-07-25 10:18:31.249600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.249607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.250034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.250041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.250503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.250510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.250989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.250995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.251512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.251539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.251757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.251768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.252135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.252143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.252589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.252597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.253066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.253073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.253628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.253654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 
00:29:52.235 [2024-07-25 10:18:31.254094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.254103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.254558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.254565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.254996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.255003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.255599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.255626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.256064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.256072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.256583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.256610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.235 [2024-07-25 10:18:31.257094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.235 [2024-07-25 10:18:31.257102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.235 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.257646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.257673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.258116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.258126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.258585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.258593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 
00:29:52.236 [2024-07-25 10:18:31.258945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.258952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.259505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.259532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.259968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.259977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.260496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.260522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.260964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.260972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.261473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.261500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.262020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.262028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.262558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.262585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.263025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.263033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.263576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.263603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 
00:29:52.236 [2024-07-25 10:18:31.264041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.264049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.264641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.264668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.265133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.265141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.265476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.265503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.265983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.265992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.266537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.266564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.267068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.267077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.267607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.267634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.268073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.268081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.268684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.268714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 
00:29:52.236 [2024-07-25 10:18:31.269193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.269207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.269662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.269689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.270134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.270143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.270612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.270639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.271080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.271088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.271504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.271513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.271953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.271959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.272469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.272496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.272930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.272938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.273439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.273466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 
00:29:52.236 [2024-07-25 10:18:31.273905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.273914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.274463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.274490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.274958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.274966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.275529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.275556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.236 [2024-07-25 10:18:31.275998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.236 [2024-07-25 10:18:31.276006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.236 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.276527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.276554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.276888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.276897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.277468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.277495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.277941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.277949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.278468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.278495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 
00:29:52.237 [2024-07-25 10:18:31.278963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.278972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1480370 Killed "${NVMF_APP[@]}" "$@"
00:29:52.237 [2024-07-25 10:18:31.279498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.279525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:52.237 [2024-07-25 10:18:31.279967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.279976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:52.237 [2024-07-25 10:18:31.280501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.280528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:52.237 [2024-07-25 10:18:31.281002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.281011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:52.237 [2024-07-25 10:18:31.281573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.281600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
00:29:52.237 [2024-07-25 10:18:31.282046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.237 [2024-07-25 10:18:31.282054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.237 qpair failed and we were unable to recover it.
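The harness messages just above explain the refusals: target_disconnect.sh reports at line 36 that the nvmf target process ("${NVMF_APP[@]}", PID 1480370) was killed, and the tc2 test case then runs disconnect_init 10.0.0.2 / nvmfappstart -m 0xF0 to bring a target back up. While the target is down, the host keeps retrying the TCP connect and collecting ECONNREFUSED. The sketch below is a hypothetical, simplified illustration of that retry pattern only, not the SPDK reconnect code; the address, port, attempt limit, and delay are invented.

/* Hypothetical illustration only, not SPDK code: a bounded retry loop
 * around connect(), similar in spirit to the repeated reconnect attempts
 * the NVMe/TCP host makes above while the target is down and being
 * restarted. Target address, port, attempt count and delay are made-up
 * placeholders. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
        return fd;                   /* connected: a listener is back */
    }

    int err = errno;                 /* typically ECONNREFUSED while the target is down */
    close(fd);
    errno = err;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 30; attempt++) {
        int fd = try_connect("127.0.0.1", 4420);     /* placeholder address and port */
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        printf("attempt %d: errno = %d (%s)\n", attempt, errno, strerror(errno));
        sleep(1);                    /* back off briefly before retrying */
    }
    fprintf(stderr, "gave up: no listener came back\n");
    return 1;
}

Once something is listening on the port again, the connect() succeeds and the loop stops, which is what the test expects to observe after the restarted target finishes initializing.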
00:29:52.237 [2024-07-25 10:18:31.282534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.282561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.283028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.283038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.283578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.283605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.284042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.284051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.284574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.284601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.285066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.285076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.285615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.285643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.286079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.286088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.286704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.286731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.287166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.287175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 
00:29:52.237 [2024-07-25 10:18:31.287729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.287756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 [2024-07-25 10:18:31.288189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.288199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1481407 00:29:52.237 [2024-07-25 10:18:31.288649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.288676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1481407 00:29:52.237 [2024-07-25 10:18:31.289110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.289120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1481407 ']' 00:29:52.237 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.237 [2024-07-25 10:18:31.289637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.237 [2024-07-25 10:18:31.289664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.237 qpair failed and we were unable to recover it. 00:29:52.238 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:52.238 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.238 [2024-07-25 10:18:31.290100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.290111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 
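The trace above shows the restart itself: a new nvmf_tgt (nvmfpid=1481407) is launched inside the cvl_0_0_ns_spdk network namespace with -m 0xF0, SPDK's hexadecimal CPU core mask, and waitforlisten then waits, per its echoed message, for the process to start up and listen on the UNIX domain socket /var/tmp/spdk.sock. As a small worked example of the mask value only, and not SPDK code, the sketch below decodes 0xF0 into the core indices it selects (cores 4 through 7).

/* Hypothetical helper, not from SPDK: decodes a CPU core mask such as the
 * 0xF0 passed to nvmf_tgt with -m in the log above. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;   /* value taken from the -m 0xF0 argument in the log */

    printf("core mask 0x%lX selects cores:", mask);
    for (int core = 0; core < 8 * (int)sizeof(mask); core++) {
        if (mask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    printf("\n");                /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
    return 0;
}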
00:29:52.238 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:52.238 10:18:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.238 [2024-07-25 10:18:31.290555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.290566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.291037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.291046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.291536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.291567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.292004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.292012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.292582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.292609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.293061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.293069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.293581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.293609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.294046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.294056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.294610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.294638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 
00:29:52.238 [2024-07-25 10:18:31.295078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.295087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.295611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.295639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.295984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.295992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.296514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.296541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.297010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.297019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.297553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.297580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.298102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.298110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.298552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.298562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.299007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.299014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.299536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.299563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 
00:29:52.238 [2024-07-25 10:18:31.299788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.299799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.300209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.300218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.300644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.300652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.301123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.301129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.301605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.301632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.301850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.301861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.302319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.302327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.238 qpair failed and we were unable to recover it. 00:29:52.238 [2024-07-25 10:18:31.302793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.238 [2024-07-25 10:18:31.302800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.303232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.303240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.303424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.303434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 
00:29:52.239 [2024-07-25 10:18:31.303903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.303911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.304256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.304264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.304719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.304725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.305060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.305066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.305642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.305649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.306072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.306078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.306525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.306552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.306990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.306998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.307569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.307596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.308065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.308074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 
00:29:52.239 [2024-07-25 10:18:31.308591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.308619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.309070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.309080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.309618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.309645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.310093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.310105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.310427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.310435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.310986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.310992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.311538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.311566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.311912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.311921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.312483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.312509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.312968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.312977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 
00:29:52.239 [2024-07-25 10:18:31.313526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.313553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.314038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.314046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.314605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.314632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.314988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.314997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.315538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.315566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.315920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.315929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.316487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.316514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.316983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.316992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.317565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.317592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.317817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.317828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 
00:29:52.239 [2024-07-25 10:18:31.318145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.318152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.318652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.318660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.319148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.319155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.319599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.319605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.320038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.239 [2024-07-25 10:18:31.320045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.239 qpair failed and we were unable to recover it. 00:29:52.239 [2024-07-25 10:18:31.320585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.320612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.321188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.321197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.321737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.321764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.322433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.322460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.322967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.322975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 
00:29:52.240 [2024-07-25 10:18:31.323528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.323556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.324076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.324085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.324536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.324544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.324998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.325005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.325535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.325562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.326030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.326038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.326581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.326608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.327054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.327063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.327603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.327630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.327983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.327991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 
00:29:52.240 [2024-07-25 10:18:31.328603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.328630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.329086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.329094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.329637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.329664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.330141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.330153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.330624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.330632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.331066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.331072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.331612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.331640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.332078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.332087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.332702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.332729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.333172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.333181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 
00:29:52.240 [2024-07-25 10:18:31.333629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.333656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.334130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.334139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.334640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.334648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.335110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.335117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.335666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.335694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.336174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.336183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.336737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.336765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.337081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.337092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.337554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.337562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.337995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.338002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 
00:29:52.240 [2024-07-25 10:18:31.338462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.338489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.338939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.338949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.240 qpair failed and we were unable to recover it. 00:29:52.240 [2024-07-25 10:18:31.339161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.240 [2024-07-25 10:18:31.339168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.339633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.339641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.339713] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:52.241 [2024-07-25 10:18:31.339762] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.241 [2024-07-25 10:18:31.340075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.340084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.340535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.340542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.340993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.341001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.341384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.341415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.341849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.341859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 
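The initialization banner interleaved above shows the DPDK EAL arguments this nvmf app instance starts with: -c 0xF0 is the EAL core mask (CPU cores 4-7), --no-telemetry disables DPDK's telemetry socket, and --file-prefix=spdk0 keeps this process's hugepage and runtime files separate from any other DPDK/SPDK process on the host. A purely illustrative bash sketch that decodes a core mask like 0xF0:

    mask=0xF0
    # Print each CPU whose bit is set in the mask.
    for cpu in $(seq 0 63); do
        (( (mask >> cpu) & 1 )) && printf '%s ' "$cpu"
    done
    echo
    # -> 4 5 6 7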
00:29:52.241 [2024-07-25 10:18:31.342225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.342242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.342697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.342705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.343152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.343160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.343624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.343634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.344090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.344098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.344535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.344544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.344992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.345001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.345550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.345579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.345933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.345943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.346472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.346502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 
00:29:52.241 [2024-07-25 10:18:31.346957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.346967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.347534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.347562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.241 [2024-07-25 10:18:31.347997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.241 [2024-07-25 10:18:31.348008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.241 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.348579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.348609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.349122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.349132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.349586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.349595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.350073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.350082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.350537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.350566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.351026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.351036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.351629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.351658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 
00:29:52.512 [2024-07-25 10:18:31.352119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.352130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.352488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.352517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.352992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.353003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.353577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.353606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.354059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.354070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.354505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.354514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.354959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.354971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.355529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.355557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.356020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.356029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.356582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.356611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 
00:29:52.512 [2024-07-25 10:18:31.357110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.357120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.357568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.357577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.358051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.358609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.358637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.359095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.359105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.359648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.359676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.360144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.360154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.360600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.360610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.361104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.361112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 00:29:52.512 [2024-07-25 10:18:31.361510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.512 [2024-07-25 10:18:31.361538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.512 qpair failed and we were unable to recover it. 
00:29:52.513 [2024-07-25 10:18:31.361969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.361980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.362546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.362575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.363074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.363085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.363553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.363562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.364011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.364019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.364589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.364617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.365065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.365075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.365637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.365666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.366109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.366118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.366641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.366670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 
00:29:52.513 [2024-07-25 10:18:31.367012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.367022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.367241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.367258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.367682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.367691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.368003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.368013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.368238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.368250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.368670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.368679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.369122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.369130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.369597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.369606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.370049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.370058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.370620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.370649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 
00:29:52.513 [2024-07-25 10:18:31.370878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.370889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.371344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.371352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.371801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.371809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.513 [2024-07-25 10:18:31.372111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.372120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.372440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.372449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.372891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.372898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.373342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.373354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.373792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.373800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.374266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.374274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.374729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.374737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 
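The EAL warning in the middle of this block ("No free 2048 kB hugepages reported on node 1") means NUMA node 1 had no 2 MB hugepages reserved when DPDK initialized; that is often harmless when node 0 has enough pages, but it is the first thing to check if EAL memory allocation later fails. A sketch of the usual ways to reserve hugepages before starting SPDK; the counts below are examples, not values taken from this job:

    # Reserve 2 MB hugepages specifically on NUMA node 1.
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # Or reserve a system-wide pool without picking a node.
    sudo sysctl -w vm.nr_hugepages=1024
    # SPDK's helper from the source tree does the same (HUGEMEM is in MB).
    sudo HUGEMEM=2048 ./scripts/setup.sh
    # Verify what ended up available.
    grep -i hugepages /proc/meminfo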
00:29:52.513 [2024-07-25 10:18:31.375084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.375092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.375523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.375532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.375968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.375977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.376403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.376412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.376886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.376895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.377329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.513 [2024-07-25 10:18:31.377338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.513 qpair failed and we were unable to recover it. 00:29:52.513 [2024-07-25 10:18:31.377783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.377792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.378102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.378110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.378580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.378588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.379024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.379032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 
00:29:52.514 [2024-07-25 10:18:31.379595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.379624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.380081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.380090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.380321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.380330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.380763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.380771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.381249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.381258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.381695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.381703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.382176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.382184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.382667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.382675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.383143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.383151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.383616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.383625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 
00:29:52.514 [2024-07-25 10:18:31.384071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.384079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.384483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.384492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.384841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.384849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.385294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.385303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.385747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.385754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.386178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.386186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.386488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.386498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.386940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.386947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.387403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.387411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.387843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.387851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 
00:29:52.514 [2024-07-25 10:18:31.388397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.388425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.388883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.388893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.389347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.389356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.389835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.389843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.390302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.390310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.390767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.390776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.390993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.391004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.391459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.391467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.391911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.391919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.392363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.392371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 
00:29:52.514 [2024-07-25 10:18:31.392620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.392628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.393104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.393112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.393574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.514 [2024-07-25 10:18:31.393582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.514 qpair failed and we were unable to recover it. 00:29:52.514 [2024-07-25 10:18:31.394013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.394021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.394471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.394479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.394961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.394969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.395515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.395543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.396013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.396023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.396503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.396532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.397006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.397016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-07-25 10:18:31.397612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.397640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.398097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.398107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.398580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.398589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.399098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.399106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.399624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.399653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.400105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.400115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.400573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.400581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.400893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.515 [2024-07-25 10:18:31.401025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.401033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.401546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.401574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 
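Every ERROR pair in this stretch reports the same condition: connect() in SPDK's POSIX socket layer returns errno 111 (ECONNREFUSED), so nvme_tcp_qpair_connect_sock cannot open a queue-pair socket to 10.0.0.2 on port 4420 (the standard NVMe/TCP service port) and the host keeps retrying. The spdk_app_start NOTICE in the block above ("Total cores available: 4") shows the target-side SPDK application only just coming up, which is consistent with nothing listening on that port yet. As a minimal illustration only (this probe is an assumption, not part of the job's scripts), the same refusal can be reproduced from a shell while the listener is down:

    # Hypothetical probe against the address/port seen in the errors above.
    # bash's /dev/tcp redirection performs a plain TCP connect(); a refused
    # connection here is the same ECONNREFUSED (errno = 111) that
    # posix_sock_create reports.
    if timeout 1 bash -c '</dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "listener is up on 10.0.0.2:4420"
    else
        echo "connect() refused or unreachable, matching errno = 111 in the log"
    fi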
00:29:52.515 [2024-07-25 10:18:31.402033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.402042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.402589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.402618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.402934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.402943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.403536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.403564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.404111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.404120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.404521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.404530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.404970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.404978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.405522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.405550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.405910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.405964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.406509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.406537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-07-25 10:18:31.407014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.407024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.407606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.407634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.408091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.408101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.408560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.408569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.409057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.409065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.409499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.409526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.409978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.409988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.410482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.410511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.410877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.410887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.515 [2024-07-25 10:18:31.411209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.411219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 
00:29:52.515 [2024-07-25 10:18:31.411655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.515 [2024-07-25 10:18:31.411663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.515 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.412124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.412132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.412649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.412677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.413140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.413150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.413718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.413747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.414206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.414216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.414647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.414656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.415103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.415111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.415642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.415671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.415892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.415904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-07-25 10:18:31.416221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.416235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.416702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.416710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.417244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.417253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.417566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.417573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.418066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.418075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.418505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.418512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.418961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.418969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.419500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.419528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.419979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.419989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.420217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.420229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-07-25 10:18:31.420431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.420442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.420902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.420911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.421370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.421379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.421838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.421845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.422148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.422157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.422612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.422621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.423071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.423080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.423650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.423680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.424051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.424062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.424553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.424581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 
00:29:52.516 [2024-07-25 10:18:31.424891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.424901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.425463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.425492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.516 qpair failed and we were unable to recover it. 00:29:52.516 [2024-07-25 10:18:31.425947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.516 [2024-07-25 10:18:31.425957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.426507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.426535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.426886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.426896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.427351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.427359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.427791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.427799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.428246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.428255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.428688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.428696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.428973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.428980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 
00:29:52.517 [2024-07-25 10:18:31.429425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.429433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.429652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.429660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.430086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.430094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.430419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.430427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.430900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.430908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.431425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.431433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.431902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.431911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.432358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.432367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.432821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.432829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.433288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.433296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 
00:29:52.517 [2024-07-25 10:18:31.433588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.433597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.434044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.434052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.434495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.434504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.434814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.434822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.435307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.435316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.435771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.435780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.436117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.436124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.436549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.436557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.437040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.437049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.437628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.437657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 
00:29:52.517 [2024-07-25 10:18:31.437942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.437951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.438453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.438462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.438940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.438948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.439449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.439477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.439758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.439768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.440218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.440227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.440711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.440719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.517 qpair failed and we were unable to recover it. 00:29:52.517 [2024-07-25 10:18:31.441179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.517 [2024-07-25 10:18:31.441188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.441411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.441419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.441911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.441919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 
00:29:52.518 [2024-07-25 10:18:31.442347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.442355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.442838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.442846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.443300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.443308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.443596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.443605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.443830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.443836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.444313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.444321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.444811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.444819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.445272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.445281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.445603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.445610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.446062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.446071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 
00:29:52.518 [2024-07-25 10:18:31.446540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.446548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.446859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.446868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.447337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.447346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.447798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.447806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.448256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.448265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.448722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.448730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.449073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.449082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.449505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.449513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.450041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.450050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.450600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.450628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 
00:29:52.518 [2024-07-25 10:18:31.451115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.451128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.451664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.451692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.452050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.452061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.452486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.452514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.452972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.452983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.453519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.453548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.453904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.453915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.454470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.454500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.455001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.455012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 00:29:52.518 [2024-07-25 10:18:31.455229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.518 [2024-07-25 10:18:31.455242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.518 qpair failed and we were unable to recover it. 
00:29:52.518 [2024-07-25 10:18:31.455303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:52.518 [2024-07-25 10:18:31.455326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:52.518 [2024-07-25 10:18:31.455332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:52.518 [2024-07-25 10:18:31.455336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:52.518 [2024-07-25 10:18:31.455340] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:52.518 [2024-07-25 10:18:31.455382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:29:52.518 [2024-07-25 10:18:31.455541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:29:52.518 [2024-07-25 10:18:31.455680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.518 [2024-07-25 10:18:31.455680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:29:52.518 [2024-07-25 10:18:31.455690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.518 qpair failed and we were unable to recover it.
00:29:52.518 [2024-07-25 10:18:31.455683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:29:52.518 [2024-07-25 10:18:31.455984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.518 [2024-07-25 10:18:31.455993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
00:29:52.519 [2024-07-25 10:18:31.456092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.519 [2024-07-25 10:18:31.456102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
00:29:52.519 [2024-07-25 10:18:31.456587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.519 [2024-07-25 10:18:31.456596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
00:29:52.519 [2024-07-25 10:18:31.456956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.519 [2024-07-25 10:18:31.456964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
00:29:52.519 [2024-07-25 10:18:31.457451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.519 [2024-07-25 10:18:31.457459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
00:29:52.519 [2024-07-25 10:18:31.457895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.519 [2024-07-25 10:18:31.457903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.519 qpair failed and we were unable to recover it.
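The NOTICE burst above marks the target application finishing startup: the tracepoint group mask 0xFFFF is applied and reactors start on cores 4, 5, 6 and 7 (four cores, matching the earlier "Total cores available: 4" message), while the host's connection retries continue to interleave with them. The app_setup_trace lines also describe how a trace snapshot could be taken; the sketch below simply restates those suggestions as commands (run on the target host while the nvmf application is still alive; spdk_trace is assumed to be on PATH, and /tmp is an arbitrary copy destination chosen for this example):

    # Capture a runtime snapshot of nvmf tracepoints, as suggested by the
    # NOTICE lines above.
    spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/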
00:29:52.519 [2024-07-25 10:18:31.458351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.458359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.458811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.458819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.459038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.459048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.459393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.459401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.459848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.459856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.460303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.460311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.460810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.460818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.461259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.461268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.461620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.461628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.461971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.461980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 
00:29:52.519 [2024-07-25 10:18:31.462478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.462487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.462923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.462931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.463381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.463390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.463843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.463852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.464326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.464335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.464802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.464810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.465260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.465268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.465754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.465763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.466265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.466275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.466729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.466737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 
00:29:52.519 [2024-07-25 10:18:31.467185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.467194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.467535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.467544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.467991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.468000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.468533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.468564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.469029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.469038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.469587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.469616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.470157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.470167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.470442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.470471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.470726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.470736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.471136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.471145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 
00:29:52.519 [2024-07-25 10:18:31.471276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.471296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.471575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.519 [2024-07-25 10:18:31.471583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.519 qpair failed and we were unable to recover it. 00:29:52.519 [2024-07-25 10:18:31.472033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.472041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.472487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.472496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.472950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.472958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.473427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.473436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.473744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.473754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.474216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.474226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.474652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.474660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.475127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.475135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 
00:29:52.520 [2024-07-25 10:18:31.475575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.475584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.476031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.476039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.476391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.476420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.476882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.476891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.477438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.477467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.477968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.477978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.478552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.478581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.478842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.478852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.479314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.479323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.479764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.479772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 
00:29:52.520 [2024-07-25 10:18:31.480081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.480090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.480491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.480500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.480809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.480818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.481163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.481172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.481651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.481659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.482125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.482134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.482582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.482590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.482934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.482942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.483401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.483410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.483896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.483904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 
00:29:52.520 [2024-07-25 10:18:31.484459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.484491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.484849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.484859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.485312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.485321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.485767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.485775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.486224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.486232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.486684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.486693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.487136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.487145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.487379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.487387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.487828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.487836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 00:29:52.520 [2024-07-25 10:18:31.488278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.520 [2024-07-25 10:18:31.488286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.520 qpair failed and we were unable to recover it. 
00:29:52.521 [2024-07-25 10:18:31.488750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.488758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.489113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.489120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.489586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.489594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.490042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.490051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.490485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.490493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.490973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.490981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.491561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.491589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.492045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.492055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.492434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.492463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.492937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.492946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 
00:29:52.521 [2024-07-25 10:18:31.493516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.493544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.493858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.493868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.494401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.494410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.494847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.494855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.495398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.495427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.495890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.495900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.496355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.496364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.496846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.496854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.497304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.497313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.497551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.497559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 
00:29:52.521 [2024-07-25 10:18:31.498007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.498015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.498453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.498462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.498918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.498927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.499380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.499388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.499633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.499641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.500082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.500090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.500537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.500546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.500855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.500865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.501228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.501238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 00:29:52.521 [2024-07-25 10:18:31.501560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.521 [2024-07-25 10:18:31.501568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.521 qpair failed and we were unable to recover it. 
00:29:52.521 [2024-07-25 10:18:31.502018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.502027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.502479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.502488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.502933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.502941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.503255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.503265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.503723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.503731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.504187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.504195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.504419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.504432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.504933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.504941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.505379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.505387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.505908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.505916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 
00:29:52.522 [2024-07-25 10:18:31.506120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.506130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.506582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.506590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.507038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.507046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.507570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.507599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.507950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.507960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.508187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.508195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.508291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.508303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.508783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.508791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.509011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.509021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.509372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.509380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 
00:29:52.522 [2024-07-25 10:18:31.509599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.509608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.510016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.510023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.510464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.510473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.510924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.510932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.511397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.511522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.511529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.511967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.511975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.512253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.512262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.512711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.512719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.513165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.513173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 
00:29:52.522 [2024-07-25 10:18:31.513397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.513406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.513869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.513876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.514343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.514351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.514797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.514805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.515253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.515261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.515706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.515714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.516178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.516185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.522 [2024-07-25 10:18:31.516409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.522 [2024-07-25 10:18:31.516416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.522 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.516940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.516948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.517385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.517394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 
00:29:52.523 [2024-07-25 10:18:31.517727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.517737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.518205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.518213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.518674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.518682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.519123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.519131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.519572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.519580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.519939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.519947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.520486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.520514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.520970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.520979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.521546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.521575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.521836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.521846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 
00:29:52.523 [2024-07-25 10:18:31.522287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.522295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.522546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.522554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.522986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.522995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.523430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.523438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.523878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.523886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.524329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.524338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.524792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.524800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.525252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.525261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.525711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.525719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.526065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.526073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 
00:29:52.523 [2024-07-25 10:18:31.526423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.526431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.526890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.526898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.527209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.527217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.527663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.527671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.528139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.528147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.528599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.528608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.529045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.529053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.529504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.529533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.530009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.530019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.530613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.530642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 
00:29:52.523 [2024-07-25 10:18:31.531104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.531114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.531565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.531574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.532004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.532013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.532549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.532578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.523 [2024-07-25 10:18:31.533027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.523 [2024-07-25 10:18:31.533037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.523 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.533596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.533624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.534068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.534077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.534683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.534711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.535165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.535175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.535719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.535747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 
00:29:52.524 [2024-07-25 10:18:31.536412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.536444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.536909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.536919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.537149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.537157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.537476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.537506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.537947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.537957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.538506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.538535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.538981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.538990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.539391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.539419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.539890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.539900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.540442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.540471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 
00:29:52.524 [2024-07-25 10:18:31.540925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.540935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.541175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.541183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.541653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.541662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.542111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.542119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.542365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.542374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.542584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.542591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.543034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.543042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.543492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.543500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.543947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.543956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.544494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.544522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 
00:29:52.524 [2024-07-25 10:18:31.544963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.544973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.545546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.545575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.545824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.545834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.546319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.546328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.546785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.546794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.547245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.547254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.547697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.547706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.548153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.548162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.548670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.548679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.549124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.549135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 
00:29:52.524 [2024-07-25 10:18:31.549586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.549595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.550082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.524 [2024-07-25 10:18:31.550090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.524 qpair failed and we were unable to recover it. 00:29:52.524 [2024-07-25 10:18:31.550552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.550560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.551076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.551084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.551609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.551637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.551741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.551752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.552122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.552132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.552575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.552584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.553031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.553039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.553485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.553494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 
00:29:52.525 [2024-07-25 10:18:31.553710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.553725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.554184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.554192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.554633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.554642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.555176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.555185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.555422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.555451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.555904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.555914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.556366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.556374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.556698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.556706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.556930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.556942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.557394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.557403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 
00:29:52.525 [2024-07-25 10:18:31.557510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.557517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.557744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.557751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.558077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.558086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.558407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.558415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.558863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.558871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.559111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.559119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.559309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.559317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.559532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.559541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.559631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.559640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.560046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.560054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 
00:29:52.525 [2024-07-25 10:18:31.560500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.560508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.561007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.561015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.561462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.561470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.561913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.561921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.562449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.562459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.562661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.562672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.563136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.563143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.563599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.563609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.564056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.525 [2024-07-25 10:18:31.564064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.525 qpair failed and we were unable to recover it. 00:29:52.525 [2024-07-25 10:18:31.564369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.564379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 
00:29:52.526 [2024-07-25 10:18:31.564848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.564856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.565306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.565315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.565652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.565661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.566100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.566108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.566417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.566425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.566879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.566887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.567320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.567328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.567614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.567623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.568058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.568066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.568310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.568318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 
00:29:52.526 [2024-07-25 10:18:31.568553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.568562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.568787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.568795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.569262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.569271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.569709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.569717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.570053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.570061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.570525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.570533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.570967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.570975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.571283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.571292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.571748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.571756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.571999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.572007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 
00:29:52.526 [2024-07-25 10:18:31.572450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.572458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.572906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.572915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.573361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.573370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.573838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.573846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.574294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.574302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.574580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.574589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.575034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.575041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.575484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.575493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.575936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.575945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 00:29:52.526 [2024-07-25 10:18:31.576398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.526 [2024-07-25 10:18:31.576428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.526 qpair failed and we were unable to recover it. 
00:29:52.526 [2024-07-25 10:18:31.576881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.576891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.577359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.577368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.577758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.577766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.578213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.578222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.578544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.578553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.578893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.578902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.579134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.579142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.579461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.579470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.579909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.579917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.580393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.580401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 
00:29:52.527 [2024-07-25 10:18:31.580748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.580756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.581100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.581108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.581358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.581366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.581802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.581810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.582253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.582262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.582699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.582707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.583151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.583160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.583666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.583674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.584164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.584173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.584609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.584618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 
00:29:52.527 [2024-07-25 10:18:31.584958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.584968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.585516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.585544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.586006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.586016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.586545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.586574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.586836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.586846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.587331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.587340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.587424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.587430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.587639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.587647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.588084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.588092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.588555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.588564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 
00:29:52.527 [2024-07-25 10:18:31.588805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.588813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.589259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.589268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.589711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.589719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.589807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.589814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.590256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.590265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.590712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.590721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.591101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.527 [2024-07-25 10:18:31.591109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.527 qpair failed and we were unable to recover it. 00:29:52.527 [2024-07-25 10:18:31.591568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.591576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.592046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.592053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.592481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.592490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 
00:29:52.528 [2024-07-25 10:18:31.592934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.592942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.593386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.593395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.593862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.593870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.594323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.594331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.594757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.594765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.595212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.595221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.595563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.595571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.596013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.596022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.596556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.596585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.597047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.597056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 
00:29:52.528 [2024-07-25 10:18:31.597461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.597489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.597942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.597952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.598585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.598614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.599070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.599080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.599487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.599516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.600023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.600033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.600583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.600612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.600830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.600841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.601300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.601309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.601544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.601555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 
00:29:52.528 [2024-07-25 10:18:31.602015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.602023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.602469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.602478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.602942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.602950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.603173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.603180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.603630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.603639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.604090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.604098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.604554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.604562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.604997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.605005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.605386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.605414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.605883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.605894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 
00:29:52.528 [2024-07-25 10:18:31.606489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.606517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.606973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.606982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.607530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.607559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.607742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.528 [2024-07-25 10:18:31.607754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.528 qpair failed and we were unable to recover it. 00:29:52.528 [2024-07-25 10:18:31.608247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.608257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.608569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.608578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.608801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.608812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.609230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.609239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.609755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.609763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.610212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.610221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 
00:29:52.529 [2024-07-25 10:18:31.610666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.610675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.611134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.611143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.611592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.611600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.612052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.612061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.612604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.612633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.613092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.613102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.613338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.613347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.613749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.613760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.614205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.614214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.614527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.614535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 
00:29:52.529 [2024-07-25 10:18:31.615005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.615012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.615576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.615605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.615849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.615858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.616161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.616170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.616402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.616410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.616868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.616877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.617325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.617333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.617798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.617806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.618251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.618259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.618707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.618715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 
00:29:52.529 [2024-07-25 10:18:31.619159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.619167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.619415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.619423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.619898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.619906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.620351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.620359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.620806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.620813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.621259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.621267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.621745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.621753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.622206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.622215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.622644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.622652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.529 [2024-07-25 10:18:31.623102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.623110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 
00:29:52.529 [2024-07-25 10:18:31.623580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.529 [2024-07-25 10:18:31.623589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.529 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.624024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.624033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.624577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.624605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.625061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.625071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.625613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.625642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.626099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.626109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.626725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.626754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.627213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.627224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.627553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.627561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.628011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.628019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 
00:29:52.530 [2024-07-25 10:18:31.628468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.628476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.628929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.628938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.629409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.629438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.629894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.629903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.630445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.630473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.630926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.630936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.631499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.631527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.631989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.632002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.632464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.632493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.632950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.632960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 
00:29:52.530 [2024-07-25 10:18:31.633528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.633556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.634008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.634018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.530 [2024-07-25 10:18:31.634589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.530 [2024-07-25 10:18:31.634619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.530 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.634973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.634984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.635521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.635551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.636006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.636016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.636559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.636588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.636940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.636951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.637524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.637552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.638009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.638018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 
00:29:52.798 [2024-07-25 10:18:31.638403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.638432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.638890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.638901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.639473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.639501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.639814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.639825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.640052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.640060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.640295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.640303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.640770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.640777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.640998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.641006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.641237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.641246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.798 [2024-07-25 10:18:31.641665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.641673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 
00:29:52.798 [2024-07-25 10:18:31.642150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.798 [2024-07-25 10:18:31.642158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.798 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.642465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.642475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.642809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.642817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.643279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.643287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.643742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.643750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.644195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.644206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.644663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.644672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.645108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.645115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.645575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.645583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.645864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.645872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 
00:29:52.799 [2024-07-25 10:18:31.646315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.646324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.646771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.646780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.647246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.647254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.647497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.647504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.647806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.647814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.648263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.648272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.648728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.648736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.649179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.649189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.649493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.649503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.649932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.649940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 
00:29:52.799 [2024-07-25 10:18:31.650411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.650419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.650637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.650648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.651095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.651103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.651326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.651334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.651767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.651776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.652021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.652028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.652459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.652468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.652914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.652922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.653163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.653171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.653623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.653631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 
00:29:52.799 [2024-07-25 10:18:31.653709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.653718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.654119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.654128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.654566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.654574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.655042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.655050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.655260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.655270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.655727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.655735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.656180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.656189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.799 [2024-07-25 10:18:31.656655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.799 [2024-07-25 10:18:31.656664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.799 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.657110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.657118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.657340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.657349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 
00:29:52.800 [2024-07-25 10:18:31.657790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.657798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.658264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.658273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.658730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.658739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.659189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.659196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.659701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.659709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.660012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.660019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.660458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.660487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.660942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.660952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.661554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.661583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.662063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.662073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 
00:29:52.800 [2024-07-25 10:18:31.662616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.662644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.663096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.663105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.663574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.663583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.664101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.664110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.664674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.664702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.664956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.664965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.665499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.665528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.665988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.666003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.666430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.666459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.666918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.666929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 
00:29:52.800 [2024-07-25 10:18:31.667487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.667516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.667748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.667758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.668224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.668234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.668691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.668700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.669145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.669153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.669383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.669391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.669694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.669701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.670145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.670154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.670464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.670473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 00:29:52.800 [2024-07-25 10:18:31.670934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.800 [2024-07-25 10:18:31.670943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.800 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection attempt from 10:18:31.671 through 10:18:31.755; only the final attempt is shown below ...]
00:29:52.806 [2024-07-25 10:18:31.755496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.806 [2024-07-25 10:18:31.755505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.806 qpair failed and we were unable to recover it.
00:29:52.806 [2024-07-25 10:18:31.756010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.756018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.756552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.756580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.757041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.757051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.757593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.757622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.758057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.758067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.758603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.758631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.759074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.759085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.759550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.759579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.760090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.760101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.760537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.760545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 
00:29:52.806 [2024-07-25 10:18:31.760995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.761003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.761544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.761572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.761818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.761828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.762281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.762290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.762756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.762765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.763208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.763217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.763471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.763480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.763927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.763935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.764386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.764395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.764839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.764848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 
00:29:52.806 [2024-07-25 10:18:31.765321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.765329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.765766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.765774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.766217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.766226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.806 [2024-07-25 10:18:31.766432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.806 [2024-07-25 10:18:31.766440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.806 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.766879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.766887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.767334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.767343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.767426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.767433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.767864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.767876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.768320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.768329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.768758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.768766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 
00:29:52.807 [2024-07-25 10:18:31.769210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.769218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.769663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.769671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.770116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.770125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.770560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.770569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.771005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.771014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.771451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.771460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.771907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.771915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.772391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.772400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.772868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.772876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.773323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.773331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 
00:29:52.807 [2024-07-25 10:18:31.773771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.773779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.774250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.774258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.774595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.774603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.775058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.775066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.775504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.775512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.775979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.775988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.776531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.776560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.777026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.777035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.777525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.777554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.778027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.778037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 
00:29:52.807 [2024-07-25 10:18:31.778584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.778613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.778965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.778975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.779511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.779540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.779858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.779869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.780310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.780319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.780774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.780782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.807 [2024-07-25 10:18:31.781227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.807 [2024-07-25 10:18:31.781236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.807 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.781673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.781681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.782034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.782042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.782489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.782497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 
00:29:52.808 [2024-07-25 10:18:31.782942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.782950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.783484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.783513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.783821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.783831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.784079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.784087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.784515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.784524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.784755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.784762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.785215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.785224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.785436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.785446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.785919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.786391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.786399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 
00:29:52.808 [2024-07-25 10:18:31.786846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.786854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.787287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.787295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.787521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.787528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.787951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.787959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.788401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.788410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.788547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.788554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.788969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.788976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.789341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.789351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.789798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.789806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.790255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.790263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 
00:29:52.808 [2024-07-25 10:18:31.790573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.790582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.791013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.791020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.791247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.791254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.791754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.791761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.792064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.792071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.792507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.792515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.792742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.792749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.793198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.793211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.793695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.793702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.794129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.794137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 
00:29:52.808 [2024-07-25 10:18:31.794222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.794229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.794662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.794669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.795116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.795124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.808 [2024-07-25 10:18:31.795331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.808 [2024-07-25 10:18:31.795343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.808 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.795768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.795777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.796003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.796011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.796139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.796150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.796582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.796591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.797067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.797075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.797524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.797532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 
00:29:52.809 [2024-07-25 10:18:31.797975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.797984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.798520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.798549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.798796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.798805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.799255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.799263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.799721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.799730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.800173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.800181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.800492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.800501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.800744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.800755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.801104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.801112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.801578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.801586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 
00:29:52.809 [2024-07-25 10:18:31.802070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.802080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.802547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.802556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.803004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.803012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.803430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.803458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.803892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.803902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.804450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.804479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.804939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.804949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.805535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.805564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.806039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.806049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.806594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.806623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 
00:29:52.809 [2024-07-25 10:18:31.806846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.806855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.807372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.807381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.807845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.807853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.808315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.808324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.808552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.808560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.809019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.809026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.809495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.809503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.809883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.809892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.810340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.810348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 00:29:52.809 [2024-07-25 10:18:31.810803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.809 [2024-07-25 10:18:31.810811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.809 qpair failed and we were unable to recover it. 
00:29:52.809 [2024-07-25 10:18:31.811054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.809 [2024-07-25 10:18:31.811061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.809 qpair failed and we were unable to recover it.
[... the same three-line failure triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry logged between 10:18:31.811 and 10:18:31.901 ...]
00:29:52.815 [2024-07-25 10:18:31.901915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.815 [2024-07-25 10:18:31.901926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420
00:29:52.815 qpair failed and we were unable to recover it.
00:29:52.815 [2024-07-25 10:18:31.902160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.902168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.902612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.902621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.903097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.903106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.903576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.903585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.903834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.903843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.904041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.904050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.904505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.904514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.904948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.815 [2024-07-25 10:18:31.904956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.815 qpair failed and we were unable to recover it. 00:29:52.815 [2024-07-25 10:18:31.905398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.905427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.905883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.905893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 
00:29:52.816 [2024-07-25 10:18:31.906462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.906490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.906946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.906956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.907518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.907547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.908003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.908013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.908487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.908517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.908973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.908983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.909533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.909561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.909998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.910009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.910573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.910601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.911056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.911066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 
00:29:52.816 [2024-07-25 10:18:31.911543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.911572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.911820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.911829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.912276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.912284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.912711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.912719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.913166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.913175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.913612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.913621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.914086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.914095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.914560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.914568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.915015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.915024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.915569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.915598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 
00:29:52.816 [2024-07-25 10:18:31.916033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.916043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.916592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.916620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.916866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.916875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.917459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.917488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.917929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.917943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.918398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.918407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.918875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.918884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.919456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.919485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.919923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.919934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.920482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.920511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 
00:29:52.816 [2024-07-25 10:18:31.920867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.920876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.921342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.921352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.921794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.921803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.922296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.922304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.922530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.922538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.816 qpair failed and we were unable to recover it. 00:29:52.816 [2024-07-25 10:18:31.922990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.816 [2024-07-25 10:18:31.922998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.817 qpair failed and we were unable to recover it. 00:29:52.817 [2024-07-25 10:18:31.923452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.817 [2024-07-25 10:18:31.923460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.817 qpair failed and we were unable to recover it. 00:29:52.817 [2024-07-25 10:18:31.923911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.817 [2024-07-25 10:18:31.923920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.817 qpair failed and we were unable to recover it. 00:29:52.817 [2024-07-25 10:18:31.924008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.817 [2024-07-25 10:18:31.924015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.817 qpair failed and we were unable to recover it. 00:29:52.817 [2024-07-25 10:18:31.924466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.817 [2024-07-25 10:18:31.924475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:52.817 qpair failed and we were unable to recover it. 
00:29:53.083 [2024-07-25 10:18:31.924964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.083 [2024-07-25 10:18:31.924975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.083 qpair failed and we were unable to recover it. 00:29:53.083 [2024-07-25 10:18:31.925444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.083 [2024-07-25 10:18:31.925453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.083 qpair failed and we were unable to recover it. 00:29:53.083 [2024-07-25 10:18:31.925753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.083 [2024-07-25 10:18:31.925760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.083 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.926082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.926090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.926487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.926497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.926933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.926942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.927390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.927400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.927658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.927667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.928126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.928135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.928673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.928681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 
00:29:53.084 [2024-07-25 10:18:31.929127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.929136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.929407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.929416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.929851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.929859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.930323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.930331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.930550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.930562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.930876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.930885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.931118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.931127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.931351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.931362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.931840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.931849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.932322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.932330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 
00:29:53.084 [2024-07-25 10:18:31.932755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.932763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.933230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.933239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.933694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.933702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.933947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.933954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.934395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.934406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.934676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.934684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.935005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.935014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.935470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.935478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.935928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.935937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.936376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.936384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 
00:29:53.084 [2024-07-25 10:18:31.936845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.936853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.937298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.937306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.937748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.937756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.938232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.938240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.938586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.938594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.939052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.939059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.939535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.939543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.939985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.939993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.940557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.940586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 00:29:53.084 [2024-07-25 10:18:31.941032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.084 [2024-07-25 10:18:31.941043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.084 qpair failed and we were unable to recover it. 
00:29:53.084 [2024-07-25 10:18:31.941589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.941617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.942072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.942082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.942435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.942463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.942699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.942710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.942930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.942937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.943183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.943191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.943392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.943401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.943666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.943674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.943917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.943925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.944401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.944410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 
00:29:53.085 [2024-07-25 10:18:31.944722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.944730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.944983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.944991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.945493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.945502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.945976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.945984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.946435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.946444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.946668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.946676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.947136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.947144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.947516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.947524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.947977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.947986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.948451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.948459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 
00:29:53.085 [2024-07-25 10:18:31.948673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.948681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.949105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.949113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.949575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.949583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.950023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.950031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.950386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.950396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.950690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.950700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.951162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.951171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.951481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.951490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.951933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.951941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.952106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.952115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 
00:29:53.085 [2024-07-25 10:18:31.952605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.952615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.953054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.953062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.953373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.953381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.953847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.953855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.954300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.954309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.954761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.954769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.955222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.085 [2024-07-25 10:18:31.955231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.085 qpair failed and we were unable to recover it. 00:29:53.085 [2024-07-25 10:18:31.955630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.955638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.956082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.956091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.956316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.956324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 
00:29:53.086 [2024-07-25 10:18:31.956776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.956785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.957218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.957227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.957452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.957466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.957555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.957563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.958021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.958030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.958502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.958512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.958754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.958763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.959219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.959230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.959683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.959692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.960165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.960173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 
00:29:53.086 [2024-07-25 10:18:31.960397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.960405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.960620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.960630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.961087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.961095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.961309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.961318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.961739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.961747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.962194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.962214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.962675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.962684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.963142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.963150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.963615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.963624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.964065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.964073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 
00:29:53.086 [2024-07-25 10:18:31.964595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.964624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.964870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.964881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.965336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.965346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.965792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.965801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.965906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.965918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.966367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.966376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.966855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.966864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.967299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.967308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.967753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.967761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.968228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.968237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 
00:29:53.086 [2024-07-25 10:18:31.968684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.968693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.969008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.969017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.969368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.969378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.969861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.969868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.086 qpair failed and we were unable to recover it. 00:29:53.086 [2024-07-25 10:18:31.970116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.086 [2024-07-25 10:18:31.970125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.970575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.970583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.970997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.971005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.971351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.971360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.971605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.971613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.972011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.972020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 
00:29:53.087 [2024-07-25 10:18:31.972465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.972474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.972939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.972947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.973421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.973450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.973759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.973770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.974205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.974213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.974643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.974652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.975101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.975110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.975580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.975589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.975672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.975683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faaa0000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 
00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Write completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 Read completed with error (sct=0, sc=8) 00:29:53.087 starting I/O failed 00:29:53.087 [2024-07-25 10:18:31.976423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.087 [2024-07-25 10:18:31.977049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.977090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 
00:29:53.087 [2024-07-25 10:18:31.977604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.977692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.978238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.978278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.978810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.978839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.979441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.979530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.980068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.980104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.980561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.980593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.981089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.981120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.981608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.981638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.982016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.982045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.982522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.982553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 
00:29:53.087 [2024-07-25 10:18:31.983048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.983076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.087 qpair failed and we were unable to recover it. 00:29:53.087 [2024-07-25 10:18:31.983462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.087 [2024-07-25 10:18:31.983493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.983993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.984022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.984351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.984381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.984878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.984907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.985443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.985473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.985951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.985980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.986242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.986271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.986459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.986491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.986994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.987023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 
00:29:53.088 [2024-07-25 10:18:31.987364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.987396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.987872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.987902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.988150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.988178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.988679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.988708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.988981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.989011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.989462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.989491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.989744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.989773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.990292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.990322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.990806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.990834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.991324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.991371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 
00:29:53.088 [2024-07-25 10:18:31.991646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.991677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.992059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.992089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.992346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.992375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.992846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.992874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.993009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.993043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.993402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.993432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.993926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.993954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.994467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.994497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.994991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.995021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.995513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.995542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 
00:29:53.088 [2024-07-25 10:18:31.996021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.996051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.996514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.088 [2024-07-25 10:18:31.996545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.088 qpair failed and we were unable to recover it. 00:29:53.088 [2024-07-25 10:18:31.997037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.997066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:31.997341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.997371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:31.997869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.997898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:31.998419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.998449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:31.998984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.999012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:31.999509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:31.999538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.000019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.000050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.000343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.000373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 
00:29:53.089 [2024-07-25 10:18:32.000863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.000891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.001280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.001309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.001760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.001790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.002271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.002300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.002752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.002781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.003278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.003308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.003786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.003814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.004305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.004334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.004829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.004858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.005354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.005383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 
00:29:53.089 [2024-07-25 10:18:32.005643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.005672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.006030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.006060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.006328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.006357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.006847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.006875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.007428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.007458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.007937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.007965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.008463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.008493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.008985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.009013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.009494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.009523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.010005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.010034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 
00:29:53.089 [2024-07-25 10:18:32.010603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.010691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.011109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.011146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.011675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.011709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.012177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.012217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.012593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.012641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.013134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.013164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.013630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.013660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.014137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.014166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.014647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.014676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 00:29:53.089 [2024-07-25 10:18:32.015141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.089 [2024-07-25 10:18:32.015170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.089 qpair failed and we were unable to recover it. 
00:29:53.089 [2024-07-25 10:18:32.015433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.015464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.015943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.015972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.016565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.016649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.017266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.017317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.017613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.017641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.018111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.018138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.018635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.018664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.019036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.019070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.019592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.019623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.020120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.020152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 
00:29:53.090 [2024-07-25 10:18:32.020630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.020660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.021158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.021187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.021490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.021520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.021885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.021920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.022299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.022329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.022698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.022730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.023220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.023251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.023629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.023662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.024140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.024169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.024697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.024728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 
00:29:53.090 [2024-07-25 10:18:32.025226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.025258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.025654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.025685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.026194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.026237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.026629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.026658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.027133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.027163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.027646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.027677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.027933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.027963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.028340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.028372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.028842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.028871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.029357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.029388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 
00:29:53.090 [2024-07-25 10:18:32.029897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.029926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.030427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.030456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.030841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.030870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.031354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.031384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.031902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.031938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.032469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.032500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.032978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.033007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.033511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.090 [2024-07-25 10:18:32.033601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.090 qpair failed and we were unable to recover it. 00:29:53.090 [2024-07-25 10:18:32.034215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.034256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.034776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.034806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 
00:29:53.091 [2024-07-25 10:18:32.035163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.035193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.035670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.035699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.036186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.036239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.036608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.036637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.037164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.037193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.037542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.037571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.038071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.038101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.038675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.038764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.039461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.039549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.040093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.040130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 
00:29:53.091 [2024-07-25 10:18:32.040594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.040626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.041122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.041151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.041629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.041659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.042143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.042172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.042655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.042685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.043178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.043215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.043350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.043378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.043850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.043879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.044380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.044413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.044662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.044689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 
00:29:53.091 [2024-07-25 10:18:32.044963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.044992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.045497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.045528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.046011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.046040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.046172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.046199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.046685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.046714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.047208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.047241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.047527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.047556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.048054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.048082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.048332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.048363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.048848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.048877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 
00:29:53.091 [2024-07-25 10:18:32.049330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.049359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.049609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.049636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.050118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.050146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.050641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.050672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.051166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.051209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.091 [2024-07-25 10:18:32.051469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.091 [2024-07-25 10:18:32.051498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.091 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.051973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.052002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.052487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.052517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.052894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.052923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.053416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.053444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 
00:29:53.092 [2024-07-25 10:18:32.053922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.053951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.054358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.054388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.054842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.054870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.055215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.055245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.055730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.055759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.056255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.056296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.056799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.056828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.057375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.057405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.057892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.057921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.058403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.058433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 
00:29:53.092 [2024-07-25 10:18:32.058880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.058909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.059413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.059443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.059919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.059948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.060429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.060459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.060739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.060768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.061133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.061161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.061660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.061689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.062219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.062248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.062611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.062639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.063137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.063165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 
00:29:53.092 [2024-07-25 10:18:32.063687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.063718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.063968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.063999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.064479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.064569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.064977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.065014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.065490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.065522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.065769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.065798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.066296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.066325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.066825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.066854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.067337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.067370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.067854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.067883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 
00:29:53.092 [2024-07-25 10:18:32.068383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.068413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.068908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.068938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.069427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.069456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.092 [2024-07-25 10:18:32.069938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.092 [2024-07-25 10:18:32.069967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.092 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.070461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.070501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.070988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.071016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.071581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.071673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.072225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.072264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.072772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.072803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.073073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.073103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 
00:29:53.093 [2024-07-25 10:18:32.073460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.073499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.073977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.074007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.074496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.074525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.074884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.074917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.075433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.075463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.075837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.075873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.076251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.076287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.076775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.076806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.077295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.077327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.077810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.077839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 
00:29:53.093 [2024-07-25 10:18:32.078338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.078368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.078859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.078887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.079254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.079284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.079845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.079874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.080374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.080405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.080907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.080935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.081453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.081482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.081754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.081783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.082282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.082312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.082446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.082473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 
00:29:53.093 [2024-07-25 10:18:32.082953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.082983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.083262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.083293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.083812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.083840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.084347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.084378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.084866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.084896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.085459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.093 [2024-07-25 10:18:32.085490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.093 qpair failed and we were unable to recover it. 00:29:53.093 [2024-07-25 10:18:32.085988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.086017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.086360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.086390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.086873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.086901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.087218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.087248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 
00:29:53.094 [2024-07-25 10:18:32.087761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.087789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.088395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.088488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.089049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.089088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.089590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.089624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.090113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.090154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.090664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.090696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.091219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.091249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.091731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.091761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.092440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.092530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.093122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.093159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 
00:29:53.094 [2024-07-25 10:18:32.093679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.093710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.094188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.094226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.094463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.094492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.094958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.094986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.095257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.095298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.095791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.095821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.096334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.096363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.096832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.096861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.097352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.097384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.097861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.097891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 
00:29:53.094 [2024-07-25 10:18:32.098385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.098415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.098914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.098943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.099413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.099443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.099848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.099881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.100371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.100401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.100907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.100937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.101412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.101441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.101923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.101952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.102215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.102244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.102544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.102574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 
00:29:53.094 [2024-07-25 10:18:32.103067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.103097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.103584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.103616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.103895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.103923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.094 [2024-07-25 10:18:32.104417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.094 [2024-07-25 10:18:32.104448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.094 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.104933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.104962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.105452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.105482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.105983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.106012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.106469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.106557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.106892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.106930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.107434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.107467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 
00:29:53.095 [2024-07-25 10:18:32.107853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.107887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.108350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.108381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.108870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.108899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.109382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.109412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.109784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.109823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.110193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.110232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.110728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.110758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.111442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.111533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.112119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.112155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.112741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.112772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 
00:29:53.095 [2024-07-25 10:18:32.113402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.113490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.114041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.114079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.114609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.114642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.115133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.115164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.115681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.115712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.116091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.116125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.116331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.116361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.116632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.116661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.117177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.117215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.117563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.117592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 
00:29:53.095 [2024-07-25 10:18:32.118100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.118129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.118406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.118435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.118910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.118939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.119426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.119456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.119956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.119985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.120482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.120514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.120852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.120881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.121533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.121564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.122054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.122083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.122565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.122595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 
00:29:53.095 [2024-07-25 10:18:32.122872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.122901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.095 qpair failed and we were unable to recover it. 00:29:53.095 [2024-07-25 10:18:32.123415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.095 [2024-07-25 10:18:32.123445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.123828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.123856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.124357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.124389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.124755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.124784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.125270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.125300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.125762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.125790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.126297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.126325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.126674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.126702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.127085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.127126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 
00:29:53.096 [2024-07-25 10:18:32.127642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.127673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.128144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.128173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.128665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.128696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.128965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.128995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.129497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.129535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.130064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.130092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.130580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.130610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.130975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.131008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.131258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.131288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.131802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.131831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 
00:29:53.096 [2024-07-25 10:18:32.132314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.132344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.132832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.132861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.133243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.133279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.133797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.133827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.134314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.134344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.134626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.134655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.135162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.135191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.135676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.135707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.136190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.136228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.136694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.136723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 
00:29:53.096 [2024-07-25 10:18:32.137169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.137207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.137668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.137697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.137980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.138010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.138514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.138605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.139022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.139057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.139556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.139588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.140072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.140101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.140603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.140637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 00:29:53.096 [2024-07-25 10:18:32.141014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.096 [2024-07-25 10:18:32.141043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.096 qpair failed and we were unable to recover it. 
00:29:53.096 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:53.096 [2024-07-25 10:18:32.141321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.141352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:29:53.097 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.097 [2024-07-25 10:18:32.141831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.141862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:53.097 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.097 [2024-07-25 10:18:32.142333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.142362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.142875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.142906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.143180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.143220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.143704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.143734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.144075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.144104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.144569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.144599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 
00:29:53.097 [2024-07-25 10:18:32.145099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.145127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.145629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.145660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.146141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.146170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.146737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.146767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.147171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.147199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.147777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.147817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.148395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.148485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.149031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.149069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.149549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.149582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.150049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.150079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 
00:29:53.097 [2024-07-25 10:18:32.150332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.150362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.150867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.150897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.151346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.151376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.151875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.151907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.152395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.152425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.152684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.152712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.153096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.153125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.153698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.153728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.153863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.153892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.154367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.154398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 
00:29:53.097 [2024-07-25 10:18:32.154902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.154933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.155421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.155453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.097 qpair failed and we were unable to recover it. 00:29:53.097 [2024-07-25 10:18:32.155613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.097 [2024-07-25 10:18:32.155641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.155887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.155916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.156417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.156448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.156931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.156961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.157239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.157269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.157812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.157840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.158334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.158365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.158861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.158891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 
00:29:53.098 [2024-07-25 10:18:32.159024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.159051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.159321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.159352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.159848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.159878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.160364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.160395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.160877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.160906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.161446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.161476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.161962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.161990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.162477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.162508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.162843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.162872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.163375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.163405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 
00:29:53.098 [2024-07-25 10:18:32.163863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.163893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.164374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.164404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.164898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.164926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.165380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.165412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.165775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.165804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.166290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.166325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.166852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.166881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.167380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.167410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.167905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.167935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.168402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.168433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 
00:29:53.098 [2024-07-25 10:18:32.168918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.168948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.169445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.169475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.169978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.170007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.170578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.170670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.170996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.171039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.171528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.171560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.171933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.171962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.172309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.172343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.172827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.172857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 00:29:53.098 [2024-07-25 10:18:32.173364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.098 [2024-07-25 10:18:32.173396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.098 qpair failed and we were unable to recover it. 
00:29:53.098 [2024-07-25 10:18:32.173898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.173928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.174411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.174440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.174926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.174957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.175238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.175267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.175641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.175670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.176170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.176230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.176727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.176757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.177255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.177284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.177671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.177700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.178189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.178240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 
00:29:53.099 [2024-07-25 10:18:32.178634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.178667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.179182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.179223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.179601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.179631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.180117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.180146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.180533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.180563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.181017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.181048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.181333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.181363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.181726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.181755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.182180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.182218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.182707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.182735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 
00:29:53.099 [2024-07-25 10:18:32.183241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.183271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.183526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.183554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.099 [2024-07-25 10:18:32.183924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.183953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.099 [2024-07-25 10:18:32.184450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.184480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.099 [2024-07-25 10:18:32.184751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.184783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.099 [2024-07-25 10:18:32.185110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.185139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.185650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.185680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.186176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.186225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 
00:29:53.099 [2024-07-25 10:18:32.186631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.186671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.187190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.187233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.187739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.187768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.188267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.188298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.188793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.188822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.189191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.189230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.189719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.189748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.190196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.099 [2024-07-25 10:18:32.190242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.099 qpair failed and we were unable to recover it. 00:29:53.099 [2024-07-25 10:18:32.190610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.190639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.191125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.191155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 
00:29:53.100 [2024-07-25 10:18:32.191623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.191654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.192160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.192189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.192677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.192707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.193185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.193222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.193740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.193769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.194268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.194298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.194663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.194691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.195058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.195088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.195371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.195400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.195915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.195944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 
00:29:53.100 [2024-07-25 10:18:32.196417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.196447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.197013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.197041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.197536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.197572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.198063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.198091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.198698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.198728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.199222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.199252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.199789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.199819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.200328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.200358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.200857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.200909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 Malloc0 00:29:53.100 [2024-07-25 10:18:32.201447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.201486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 
00:29:53.100 [2024-07-25 10:18:32.202009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.202039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.100 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:53.100 [2024-07-25 10:18:32.202517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.202548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.100 [2024-07-25 10:18:32.203045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.100 [2024-07-25 10:18:32.203074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.203585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.203645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.204220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.204285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.204727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.204785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.205364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.205418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.205962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.206009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 
00:29:53.100 [2024-07-25 10:18:32.206587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.206680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.207240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.207279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.207594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.207625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.208102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.208132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.208632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.208663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.100 [2024-07-25 10:18:32.208798] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.100 [2024-07-25 10:18:32.209163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.100 [2024-07-25 10:18:32.209191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.100 qpair failed and we were unable to recover it. 00:29:53.101 [2024-07-25 10:18:32.209514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.209545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.101 qpair failed and we were unable to recover it. 00:29:53.101 [2024-07-25 10:18:32.209797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.209827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.101 qpair failed and we were unable to recover it. 00:29:53.101 [2024-07-25 10:18:32.210125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.210154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.101 qpair failed and we were unable to recover it. 
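The shell trace above (host/target_disconnect.sh@21) issues rpc_cmd nvmf_create_transport -t tcp -o against the target application, and the tcp.c notice "*** TCP Transport Init ***" confirms the transport came up. A rough sketch of the same step outside the test harness, assuming a running nvmf_tgt, an SPDK checkout in $SPDK_DIR, and the default /var/tmp/spdk.sock RPC socket (the extra -o option used by the test script is left out here):

    # Create the NVMe-oF TCP transport in a running nvmf_tgt (sketch).
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_transport -t tcp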
00:29:53.101 [2024-07-25 10:18:32.210494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.210526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.101 qpair failed and we were unable to recover it. 00:29:53.101 [2024-07-25 10:18:32.211026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.211055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.101 qpair failed and we were unable to recover it. 00:29:53.101 [2024-07-25 10:18:32.211428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.101 [2024-07-25 10:18:32.211463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.211939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.211970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.212416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.212446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.212709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.212738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.213221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.213252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.213742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.213771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.214245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.214275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.214780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.214809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 
00:29:53.364 [2024-07-25 10:18:32.215123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.215153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.215667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.215696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.215957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.215985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.216514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.216546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.364 qpair failed and we were unable to recover it. 00:29:53.364 [2024-07-25 10:18:32.217035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.364 [2024-07-25 10:18:32.217078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.217601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.217632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.365 [2024-07-25 10:18:32.218122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.365 [2024-07-25 10:18:32.218152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.365 [2024-07-25 10:18:32.218772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.218805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 
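host/target_disconnect.sh@22 then creates the subsystem the initiator will target. Roughly the same call issued by hand, under the same assumptions as the sketch above (-a allows any host NQN to connect, -s sets the serial number the controller reports):

    # Define subsystem cnode1, open to any host, with a fixed serial number.
    "$SPDK_DIR"/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001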
00:29:53.365 [2024-07-25 10:18:32.219308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.219374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.219924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.219984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.220413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.220479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.221064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.221121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.221665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.221729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.222257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.222291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.222776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.222806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.223313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.223344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.223845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.223875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.224364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.224395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 
00:29:53.365 [2024-07-25 10:18:32.224901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.224929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.225316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.225345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.225908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.225937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.226422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.226454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.226827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.226856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.227354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.227384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.227872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.227902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.228136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.228164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.228570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.228599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.228974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.229014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 
00:29:53.365 [2024-07-25 10:18:32.229380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.229412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.229660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.229689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.365 [2024-07-25 10:18:32.230192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.230239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.365 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.365 [2024-07-25 10:18:32.230731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.230762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.231258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.231322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.231756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.231814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.232226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.232281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.232824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.232884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 
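host/target_disconnect.sh@24 attaches the Malloc0 ram-disk bdev (whose name appears a few lines earlier) as a namespace of cnode1. An equivalent standalone sketch, assuming Malloc0 was already created (for example with bdev_malloc_create):

    # Expose the Malloc0 bdev as a namespace of cnode1.
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0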
00:29:53.365 [2024-07-25 10:18:32.233390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.233430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.233910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.365 [2024-07-25 10:18:32.233941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.365 qpair failed and we were unable to recover it. 00:29:53.365 [2024-07-25 10:18:32.234444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.234477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.234857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.234906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.235389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.235421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.235910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.235939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.236443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.236474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.236955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.236984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.237565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.237660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.238098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.238142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 
00:29:53.366 [2024-07-25 10:18:32.238467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.238501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.239030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.239060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.239547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.239581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.240063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.240092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.240584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.240614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.240965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.241016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.241547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.241579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.366 [2024-07-25 10:18:32.242071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.242102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 
00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.366 [2024-07-25 10:18:32.242585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.242616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.366 [2024-07-25 10:18:32.243132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.243183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.243601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.243662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.243985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.244042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.244589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.244648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.245232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.245291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.245837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.245878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.246351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.246384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 
00:29:53.366 [2024-07-25 10:18:32.246895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.246925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.247312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.247347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.247830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.247862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.248351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.248381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.248765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.248795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 [2024-07-25 10:18:32.249178] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.366 [2024-07-25 10:18:32.249313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.366 [2024-07-25 10:18:32.249343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7faa98000b90 with addr=10.0.0.2, port=4420 00:29:53.366 qpair failed and we were unable to recover it. 
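With host/target_disconnect.sh@25 the subsystem gets a TCP listener, and the tcp.c notice above shows the target now accepting connections on 10.0.0.2:4420; the discovery listener is added the same way just below. A sketch of the listener call by itself, under the same assumptions as the earlier rpc.py sketches:

    # Listen for cnode1 on the TCP transport at 10.0.0.2:4420.
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420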
00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:53.366 [2024-07-25 10:18:32.259571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.366 [2024-07-25 10:18:32.259751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.366 [2024-07-25 10:18:32.259803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.366 [2024-07-25 10:18:32.259825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.366 [2024-07-25 10:18:32.259847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.366 [2024-07-25 10:18:32.259903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.366 qpair failed and we were unable to recover it. 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:53.366 10:18:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1480716 00:29:53.366 [2024-07-25 10:18:32.269585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.269777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.269817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.269834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.269850] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.269889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 
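From here the failure signature changes: the TCP connection now succeeds, but the Fabrics CONNECT for I/O qpair 4 is rejected, presumably because the association the test deliberately disrupted is gone on the target side ("Unknown controller ID 0x1"), so the host sees sct 1, sc 130 (0x82, an invalid CONNECT parameter) and gives up on the qpair; the blocks that follow repeat this pattern while the script waits on PID 1480716. The test drives SPDK's own userspace initiator, but as a rough illustration the same CONNECT exchange can be exercised against a healthy target from the kernel initiator (assumes nvme-cli and the nvme-tcp kernel module are available on the initiator host):

    # Re-establish a fresh association with cnode1; the controller ID is
    # renegotiated during CONNECT, so the stale ID 0x1 rejected above is not reused.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list-subsys   # confirm the new controller/session is present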
00:29:53.367 [2024-07-25 10:18:32.279517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.279662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.279693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.279704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.279714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.279742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.289395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.289514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.289540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.289549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.289555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.289577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.299507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.299634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.299657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.299666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.299673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.299693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-25 10:18:32.309505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.309621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.309646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.309655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.309662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.309682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.319561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.319677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.319701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.319710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.319722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.319743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.329559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.329681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.329709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.329721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.329728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.329751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-25 10:18:32.339589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.339707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.339734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.339743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.339750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.339772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.349651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.349767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.349792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.349802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.349809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.349830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.359671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.359792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.359818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.359826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.359833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.359855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-25 10:18:32.369715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.369828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.369855] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.369865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.369872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.369893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.379749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.379995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.380022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.380031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.380038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.380060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 00:29:53.367 [2024-07-25 10:18:32.389791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.389905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.367 [2024-07-25 10:18:32.389932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.367 [2024-07-25 10:18:32.389941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.367 [2024-07-25 10:18:32.389949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.367 [2024-07-25 10:18:32.389970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.367 qpair failed and we were unable to recover it. 
00:29:53.367 [2024-07-25 10:18:32.399797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.367 [2024-07-25 10:18:32.399921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.399947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.399957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.399963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.399984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.409860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.409988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.410018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.410035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.410042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.410065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.419766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.419904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.419933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.419943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.419950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.419973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-25 10:18:32.429915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.430030] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.430059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.430069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.430076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.430099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.439979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.440126] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.440154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.440163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.440170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.440193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.449976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.450109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.450138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.450148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.450155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.450178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-25 10:18:32.460012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.460144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.460174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.460183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.460190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.460220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.470188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.470326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.470356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.470367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.470374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.470398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.368 [2024-07-25 10:18:32.480126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.480276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.480305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.480314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.480321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.480343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 
00:29:53.368 [2024-07-25 10:18:32.490141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.368 [2024-07-25 10:18:32.490304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.368 [2024-07-25 10:18:32.490334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.368 [2024-07-25 10:18:32.490343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.368 [2024-07-25 10:18:32.490350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.368 [2024-07-25 10:18:32.490374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.368 qpair failed and we were unable to recover it. 00:29:53.630 [2024-07-25 10:18:32.500256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.630 [2024-07-25 10:18:32.500383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.630 [2024-07-25 10:18:32.500420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.630 [2024-07-25 10:18:32.500430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.630 [2024-07-25 10:18:32.500437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.630 [2024-07-25 10:18:32.500460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.630 qpair failed and we were unable to recover it. 00:29:53.630 [2024-07-25 10:18:32.510158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.630 [2024-07-25 10:18:32.510280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.630 [2024-07-25 10:18:32.510310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.630 [2024-07-25 10:18:32.510320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.630 [2024-07-25 10:18:32.510327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.630 [2024-07-25 10:18:32.510350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.630 qpair failed and we were unable to recover it. 
00:29:53.630 [2024-07-25 10:18:32.520153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.630 [2024-07-25 10:18:32.520279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.630 [2024-07-25 10:18:32.520309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.630 [2024-07-25 10:18:32.520319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.630 [2024-07-25 10:18:32.520327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.630 [2024-07-25 10:18:32.520350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.630 qpair failed and we were unable to recover it. 00:29:53.630 [2024-07-25 10:18:32.530210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.630 [2024-07-25 10:18:32.530329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.630 [2024-07-25 10:18:32.530359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.630 [2024-07-25 10:18:32.530368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.630 [2024-07-25 10:18:32.530376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.630 [2024-07-25 10:18:32.530400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.630 qpair failed and we were unable to recover it. 00:29:53.630 [2024-07-25 10:18:32.540268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.630 [2024-07-25 10:18:32.540398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.630 [2024-07-25 10:18:32.540429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.630 [2024-07-25 10:18:32.540439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.630 [2024-07-25 10:18:32.540447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.630 [2024-07-25 10:18:32.540477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.630 qpair failed and we were unable to recover it. 
00:29:53.631 [2024-07-25 10:18:32.550272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.550395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.550424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.550435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.550443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.550466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.560194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.560320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.560350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.560360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.560367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.560393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.570281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.570404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.570435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.570445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.570452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.570476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 
00:29:53.631 [2024-07-25 10:18:32.580367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.580499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.580527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.580537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.580544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.580566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.590376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.590505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.590539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.590549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.590556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.590578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.600276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.600395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.600424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.600433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.600441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.600465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 
00:29:53.631 [2024-07-25 10:18:32.610446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.610569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.610599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.610610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.610617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.610641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.620476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.620607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.620637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.620646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.620654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.620676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.630491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.630601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.630631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.630641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.630650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.630679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 
00:29:53.631 [2024-07-25 10:18:32.640489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.640606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.640636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.640645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.640653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.640675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.650605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.650727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.650756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.650765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.650772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.650795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.660617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.660753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.660795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.660807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.660814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.660844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 
00:29:53.631 [2024-07-25 10:18:32.670583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.670701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.631 [2024-07-25 10:18:32.670742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.631 [2024-07-25 10:18:32.670754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.631 [2024-07-25 10:18:32.670762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.631 [2024-07-25 10:18:32.670791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.631 qpair failed and we were unable to recover it. 00:29:53.631 [2024-07-25 10:18:32.680538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.631 [2024-07-25 10:18:32.680670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.680711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.680722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.680730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.680761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.632 [2024-07-25 10:18:32.690656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.690779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.690812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.690822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.690830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.690856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 
00:29:53.632 [2024-07-25 10:18:32.700721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.700854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.700897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.700908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.700916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.700945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.632 [2024-07-25 10:18:32.710739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.710857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.710888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.710898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.710905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.710930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.632 [2024-07-25 10:18:32.720820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.720973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.721014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.721025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.721041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.721072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 
00:29:53.632 [2024-07-25 10:18:32.730811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.730926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.730957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.730967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.730974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.730998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.632 [2024-07-25 10:18:32.740784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.740909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.740939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.740949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.740956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.740979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.632 [2024-07-25 10:18:32.750833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.750945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.750975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.750985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.750992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.751015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 
00:29:53.632 [2024-07-25 10:18:32.760869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.632 [2024-07-25 10:18:32.760988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.632 [2024-07-25 10:18:32.761018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.632 [2024-07-25 10:18:32.761027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.632 [2024-07-25 10:18:32.761034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.632 [2024-07-25 10:18:32.761057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.632 qpair failed and we were unable to recover it. 00:29:53.893 [2024-07-25 10:18:32.770799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.770920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.770949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.770959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.770967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.770990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.780944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.781072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.781101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.781111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.781119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.781142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 
00:29:53.894 [2024-07-25 10:18:32.790991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.791106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.791136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.791145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.791153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.791177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.800995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.801140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.801170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.801179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.801187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.801217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.811020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.811177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.811213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.811230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.811237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.811261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 
00:29:53.894 [2024-07-25 10:18:32.821112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.821249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.821278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.821288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.821294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.821318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.831085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.831207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.831236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.831246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.831253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.831276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.841158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.841279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.841308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.841317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.841324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.841347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 
00:29:53.894 [2024-07-25 10:18:32.851162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.851280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.851309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.851319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.851326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.851349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.861224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.861350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.861379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.861389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.861396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.861420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.871227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.871344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.871374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.871384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.871391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.871414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 
00:29:53.894 [2024-07-25 10:18:32.881264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.881383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.881412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.881422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.881430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.881454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.891320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.894 [2024-07-25 10:18:32.891437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.894 [2024-07-25 10:18:32.891467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.894 [2024-07-25 10:18:32.891476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.894 [2024-07-25 10:18:32.891484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.894 [2024-07-25 10:18:32.891507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.894 qpair failed and we were unable to recover it. 00:29:53.894 [2024-07-25 10:18:32.901322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.901448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.901478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.901495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.901502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.901525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 
00:29:53.895 [2024-07-25 10:18:32.911358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.911499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.911529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.911538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.911547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.911570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.921380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.921497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.921525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.921535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.921542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.921563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.931391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.931509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.931537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.931547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.931554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.931575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 
00:29:53.895 [2024-07-25 10:18:32.941467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.941594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.941624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.941633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.941640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.941663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.951460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.951696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.951728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.951737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.951745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.951768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.961514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.961633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.961662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.961672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.961679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.961701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 
00:29:53.895 [2024-07-25 10:18:32.971467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.971583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.971612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.971622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.971628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.971651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.981571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.981723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.981753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.981762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.981769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.981793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:32.991508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:32.991628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:32.991664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:32.991673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:32.991680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:32.991704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 
00:29:53.895 [2024-07-25 10:18:33.001658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:33.001777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:33.001818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:33.001830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:33.001837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:33.001866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:33.011657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:33.011780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:33.011821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:33.011833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:33.011840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:33.011869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 00:29:53.895 [2024-07-25 10:18:33.021672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.895 [2024-07-25 10:18:33.021802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.895 [2024-07-25 10:18:33.021844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.895 [2024-07-25 10:18:33.021855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.895 [2024-07-25 10:18:33.021862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:53.895 [2024-07-25 10:18:33.021892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.895 qpair failed and we were unable to recover it. 
00:29:54.157 [2024-07-25 10:18:33.031767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.031935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.031977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.031988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.031995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.032040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.041755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.041915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.041948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.041958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.041965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.041990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.051697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.051811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.051841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.051851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.051859] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.051883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 
00:29:54.157 [2024-07-25 10:18:33.061853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.061975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.062004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.062014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.062021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.062044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.071886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.071993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.072023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.072033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.072040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.072063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.081893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.082012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.082049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.082060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.082067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.082091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 
00:29:54.157 [2024-07-25 10:18:33.091960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.092080] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.092110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.092120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.092127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.092150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.101989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.102114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.102143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.102153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.102160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.102184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 00:29:54.157 [2024-07-25 10:18:33.111915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.112038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.112068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.112078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.157 [2024-07-25 10:18:33.112086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.157 [2024-07-25 10:18:33.112109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.157 qpair failed and we were unable to recover it. 
00:29:54.157 [2024-07-25 10:18:33.122099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.157 [2024-07-25 10:18:33.122227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.157 [2024-07-25 10:18:33.122257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.157 [2024-07-25 10:18:33.122267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.122281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.122303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.132060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.132180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.132214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.132224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.132231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.132253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.142079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.142214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.142244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.142253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.142260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.142283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 
00:29:54.158 [2024-07-25 10:18:33.151991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.152110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.152140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.152149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.152156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.152178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.162131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.162250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.162280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.162290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.162297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.162320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.172203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.172324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.172353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.172363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.172370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.172393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 
00:29:54.158 [2024-07-25 10:18:33.182259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.182395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.182424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.182433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.182440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.182464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.192258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.192406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.192437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.192446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.192453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.192475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.202292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.202418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.202448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.202458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.202465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.202488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 
00:29:54.158 [2024-07-25 10:18:33.212329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.212444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.212474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.212490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.212496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.212520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.222320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.222447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.222477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.222487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.222495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.222517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.232363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.232493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.232522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.232532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.232539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.232561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 
00:29:54.158 [2024-07-25 10:18:33.242414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.242527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.242556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.242566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.242573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.242596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.158 [2024-07-25 10:18:33.252460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.158 [2024-07-25 10:18:33.252587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.158 [2024-07-25 10:18:33.252616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.158 [2024-07-25 10:18:33.252626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.158 [2024-07-25 10:18:33.252633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.158 [2024-07-25 10:18:33.252656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.158 qpair failed and we were unable to recover it. 00:29:54.159 [2024-07-25 10:18:33.262479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.159 [2024-07-25 10:18:33.262615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.159 [2024-07-25 10:18:33.262646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.159 [2024-07-25 10:18:33.262655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.159 [2024-07-25 10:18:33.262662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.159 [2024-07-25 10:18:33.262686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.159 qpair failed and we were unable to recover it. 
00:29:54.159 [2024-07-25 10:18:33.272553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.159 [2024-07-25 10:18:33.272700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.159 [2024-07-25 10:18:33.272729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.159 [2024-07-25 10:18:33.272739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.159 [2024-07-25 10:18:33.272746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.159 [2024-07-25 10:18:33.272768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.159 qpair failed and we were unable to recover it. 00:29:54.159 [2024-07-25 10:18:33.282546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.159 [2024-07-25 10:18:33.282677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.159 [2024-07-25 10:18:33.282718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.159 [2024-07-25 10:18:33.282729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.159 [2024-07-25 10:18:33.282737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.159 [2024-07-25 10:18:33.282767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.159 qpair failed and we were unable to recover it. 00:29:54.421 [2024-07-25 10:18:33.292551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.421 [2024-07-25 10:18:33.292676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.421 [2024-07-25 10:18:33.292717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.421 [2024-07-25 10:18:33.292729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.421 [2024-07-25 10:18:33.292736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.421 [2024-07-25 10:18:33.292766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.421 qpair failed and we were unable to recover it. 
00:29:54.421 [2024-07-25 10:18:33.302625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.421 [2024-07-25 10:18:33.302791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.421 [2024-07-25 10:18:33.302823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.421 [2024-07-25 10:18:33.302842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.421 [2024-07-25 10:18:33.302849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.421 [2024-07-25 10:18:33.302873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.421 qpair failed and we were unable to recover it. 00:29:54.421 [2024-07-25 10:18:33.312636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.421 [2024-07-25 10:18:33.312757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.421 [2024-07-25 10:18:33.312799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.421 [2024-07-25 10:18:33.312810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.421 [2024-07-25 10:18:33.312818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.421 [2024-07-25 10:18:33.312848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.421 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.322603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.322736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.322777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.322789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.322797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.322828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 
00:29:54.422 [2024-07-25 10:18:33.332724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.332837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.332869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.332879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.332887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.332912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.342780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.342909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.342939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.342949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.342956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.342980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.352756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.352888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.352919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.352928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.352935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.352958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 
00:29:54.422 [2024-07-25 10:18:33.362792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.362911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.362940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.362949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.362956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.362979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.372725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.372839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.372869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.372878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.372884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.372907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.382848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.383099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.383132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.383142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.383150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.383173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 
00:29:54.422 [2024-07-25 10:18:33.392824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.392944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.392983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.392993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.393000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.393026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.402912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.403026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.403056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.403066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.403074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.403098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.412877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.413002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.413032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.413042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.413049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.413074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 
00:29:54.422 [2024-07-25 10:18:33.422873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.422996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.423026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.423036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.423043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.423066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.432985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.433099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.433129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.433138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.433145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.433175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 00:29:54.422 [2024-07-25 10:18:33.443049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.443164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.422 [2024-07-25 10:18:33.443193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.422 [2024-07-25 10:18:33.443211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.422 [2024-07-25 10:18:33.443218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.422 [2024-07-25 10:18:33.443241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.422 qpair failed and we were unable to recover it. 
00:29:54.422 [2024-07-25 10:18:33.453094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.422 [2024-07-25 10:18:33.453226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.453260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.453270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.453277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.453301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.463100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.463238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.463269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.463279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.463285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.463309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.473125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.473248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.473280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.473290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.473297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.473321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 
00:29:54.423 [2024-07-25 10:18:33.483170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.483300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.483337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.483347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.483354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.483378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.493206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.493320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.493350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.493359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.493367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.493391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.503188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.503323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.503353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.503363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.503370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.503394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 
00:29:54.423 [2024-07-25 10:18:33.513251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.513384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.513413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.513422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.513429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.513453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.523272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.523391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.523420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.523430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.523444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.523466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.533361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.533512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.533541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.533550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.533557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.533580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 
00:29:54.423 [2024-07-25 10:18:33.543308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.543452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.543482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.543491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.543498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.543520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.423 qpair failed and we were unable to recover it. 00:29:54.423 [2024-07-25 10:18:33.553264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.423 [2024-07-25 10:18:33.553385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.423 [2024-07-25 10:18:33.553416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.423 [2024-07-25 10:18:33.553426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.423 [2024-07-25 10:18:33.553433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.423 [2024-07-25 10:18:33.553456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-25 10:18:33.563401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.563522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.563551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.563560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.563568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.563590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 
00:29:54.684 [2024-07-25 10:18:33.573437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.573556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.573586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.573596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.573603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.573627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-25 10:18:33.583499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.583621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.583650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.583660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.583667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.583690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-25 10:18:33.593516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.593629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.593658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.593668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.593675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.593698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 
00:29:54.684 [2024-07-25 10:18:33.603445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.603557] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.603588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.603597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.603604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.603628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-25 10:18:33.613575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.613691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.613722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.613732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.613745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.613769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 00:29:54.684 [2024-07-25 10:18:33.623643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.623802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.623843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.623855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.623863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.684 [2024-07-25 10:18:33.623892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.684 qpair failed and we were unable to recover it. 
00:29:54.684 [2024-07-25 10:18:33.633625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.684 [2024-07-25 10:18:33.633759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.684 [2024-07-25 10:18:33.633800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.684 [2024-07-25 10:18:33.633811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.684 [2024-07-25 10:18:33.633819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.633850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.643534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.643655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.643687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.643697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.643704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.643729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.653587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.653709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.653739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.653749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.653756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.653779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 
00:29:54.685 [2024-07-25 10:18:33.663700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.663833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.663864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.663874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.663881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.663906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.673607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.673725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.673754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.673764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.673771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.673794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.683783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.683907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.683937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.683947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.683954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.683977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 
00:29:54.685 [2024-07-25 10:18:33.693687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.693800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.693830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.693839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.693846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.693869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.703853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.703981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.704010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.704032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.704040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.704062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.713819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.713937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.713966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.713978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.713985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.714009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 
00:29:54.685 [2024-07-25 10:18:33.723868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.723980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.724010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.724019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.724027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.724050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.734057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.734183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.734219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.734229] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.734236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.734261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.743952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.744097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.744127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.744137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.744144] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.744167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 
00:29:54.685 [2024-07-25 10:18:33.753991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.754105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.754134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.685 [2024-07-25 10:18:33.754143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.685 [2024-07-25 10:18:33.754150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.685 [2024-07-25 10:18:33.754173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.685 qpair failed and we were unable to recover it. 00:29:54.685 [2024-07-25 10:18:33.764008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.685 [2024-07-25 10:18:33.764122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.685 [2024-07-25 10:18:33.764151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.764160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.764167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.764190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 00:29:54.686 [2024-07-25 10:18:33.774082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.686 [2024-07-25 10:18:33.774193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.686 [2024-07-25 10:18:33.774230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.774242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.774249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.774274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 
00:29:54.686 [2024-07-25 10:18:33.784019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.686 [2024-07-25 10:18:33.784153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.686 [2024-07-25 10:18:33.784182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.784191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.784198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.784233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 00:29:54.686 [2024-07-25 10:18:33.794142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.686 [2024-07-25 10:18:33.794261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.686 [2024-07-25 10:18:33.794298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.794308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.794315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.794339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 00:29:54.686 [2024-07-25 10:18:33.804122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.686 [2024-07-25 10:18:33.804250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.686 [2024-07-25 10:18:33.804280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.804290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.804297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.804320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 
00:29:54.686 [2024-07-25 10:18:33.814211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.686 [2024-07-25 10:18:33.814332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.686 [2024-07-25 10:18:33.814362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.686 [2024-07-25 10:18:33.814372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.686 [2024-07-25 10:18:33.814379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.686 [2024-07-25 10:18:33.814402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.686 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.824253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.824386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.824415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.824425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.824433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.824456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.834260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.834369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.834398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.834408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.834415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.834445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 
00:29:54.948 [2024-07-25 10:18:33.844233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.844385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.844414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.844422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.844429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.844452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.854312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.854478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.854507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.854518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.854526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.854550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.864243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.864374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.864404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.864413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.864420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.864444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 
00:29:54.948 [2024-07-25 10:18:33.874289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.874426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.874453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.874463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.874470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.874492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.884396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.884534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.884569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.884579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.884586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.884609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 00:29:54.948 [2024-07-25 10:18:33.894425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.948 [2024-07-25 10:18:33.894555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.948 [2024-07-25 10:18:33.894584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.948 [2024-07-25 10:18:33.894593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.948 [2024-07-25 10:18:33.894601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.948 [2024-07-25 10:18:33.894624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.948 qpair failed and we were unable to recover it. 
00:29:54.949 [2024-07-25 10:18:33.904476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.904601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.904630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.904639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.904646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.904669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.914516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.914648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.914678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.914688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.914695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.914718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.924553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.924796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.924824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.924834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.924849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.924872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 
00:29:54.949 [2024-07-25 10:18:33.934595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.934733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.934774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.934785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.934792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.934824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.944662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.944822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.944863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.944874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.944881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.944911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.954625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.954742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.954784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.954795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.954803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.954833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 
00:29:54.949 [2024-07-25 10:18:33.964641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.964759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.964802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.964813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.964821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.964851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.974842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.974978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.975010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.975019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.975026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.975051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:33.984686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.984814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.984845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.984855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.984863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.984888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 
00:29:54.949 [2024-07-25 10:18:33.994745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:33.994868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:33.994899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:33.994908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:33.994916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:33.994939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:34.004775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:34.004891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.949 [2024-07-25 10:18:34.004920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.949 [2024-07-25 10:18:34.004930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.949 [2024-07-25 10:18:34.004937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.949 [2024-07-25 10:18:34.004963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.949 qpair failed and we were unable to recover it. 00:29:54.949 [2024-07-25 10:18:34.014773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.949 [2024-07-25 10:18:34.014892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.014923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.014932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.014947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.014971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 
00:29:54.950 [2024-07-25 10:18:34.024970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.025108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.025138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.025148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.025155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.025178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 00:29:54.950 [2024-07-25 10:18:34.035016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.035149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.035178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.035187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.035194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.035224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 00:29:54.950 [2024-07-25 10:18:34.044800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.044915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.044945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.044955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.044962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.044985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 
00:29:54.950 [2024-07-25 10:18:34.055025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.055150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.055180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.055189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.055196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.055227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 00:29:54.950 [2024-07-25 10:18:34.065002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.065129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.065158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.065168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.065175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.065198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 00:29:54.950 [2024-07-25 10:18:34.074959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.950 [2024-07-25 10:18:34.075077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.950 [2024-07-25 10:18:34.075107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.950 [2024-07-25 10:18:34.075117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.950 [2024-07-25 10:18:34.075124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:54.950 [2024-07-25 10:18:34.075147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:54.950 qpair failed and we were unable to recover it. 
00:29:55.213 [2024-07-25 10:18:34.085148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.085287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.085317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.085327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.085334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.085358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.094962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.095079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.095108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.095118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.095125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.095148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.105059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.105212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.105242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.105259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.105266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.105290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 
00:29:55.213 [2024-07-25 10:18:34.115003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.115118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.115148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.115157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.115164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.115187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.125168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.125299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.125328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.125338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.125345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.125368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.135179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.135306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.135336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.135345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.135353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.135377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 
00:29:55.213 [2024-07-25 10:18:34.145176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.145366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.145396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.145405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.145412] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.145436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.155211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.155325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.155355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.155365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.155372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.155396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 00:29:55.213 [2024-07-25 10:18:34.165299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.165423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.165453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.213 [2024-07-25 10:18:34.165463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.213 [2024-07-25 10:18:34.165471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.213 [2024-07-25 10:18:34.165493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.213 qpair failed and we were unable to recover it. 
00:29:55.213 [2024-07-25 10:18:34.175281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.213 [2024-07-25 10:18:34.175411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.213 [2024-07-25 10:18:34.175440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.175450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.175458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.175481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.185331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.185452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.185482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.185492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.185499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.185525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.195369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.195519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.195555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.195566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.195573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.195595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 
00:29:55.214 [2024-07-25 10:18:34.205394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.205511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.205539] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.205549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.205556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.205580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.215397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.215526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.215560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.215573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.215580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.215604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.225452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.225582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.225613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.225623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.225630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.225652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 
00:29:55.214 [2024-07-25 10:18:34.235474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.235634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.235664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.235674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.235681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.235712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.245509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.245633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.245663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.245672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.245680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.245703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.255535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.255656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.255686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.255696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.255703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.255725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 
00:29:55.214 [2024-07-25 10:18:34.265555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.265678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.265708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.265718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.265725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.265748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.275608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.275737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.275778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.275789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.275797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.275827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.285624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.285754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.285802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.285813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.285821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.285852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 
00:29:55.214 [2024-07-25 10:18:34.295600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.295725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.295765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.295777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.295784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.214 [2024-07-25 10:18:34.295814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.214 qpair failed and we were unable to recover it. 00:29:55.214 [2024-07-25 10:18:34.305698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.214 [2024-07-25 10:18:34.305838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.214 [2024-07-25 10:18:34.305879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.214 [2024-07-25 10:18:34.305891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.214 [2024-07-25 10:18:34.305899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.215 [2024-07-25 10:18:34.305929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.215 qpair failed and we were unable to recover it. 00:29:55.215 [2024-07-25 10:18:34.315718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.215 [2024-07-25 10:18:34.315848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.215 [2024-07-25 10:18:34.315888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.215 [2024-07-25 10:18:34.315899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.215 [2024-07-25 10:18:34.315906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.215 [2024-07-25 10:18:34.315938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.215 qpair failed and we were unable to recover it. 
00:29:55.215 [2024-07-25 10:18:34.325772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.215 [2024-07-25 10:18:34.325899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.215 [2024-07-25 10:18:34.325932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.215 [2024-07-25 10:18:34.325941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.215 [2024-07-25 10:18:34.325950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.215 [2024-07-25 10:18:34.325984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.215 qpair failed and we were unable to recover it. 00:29:55.215 [2024-07-25 10:18:34.335837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.215 [2024-07-25 10:18:34.336010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.215 [2024-07-25 10:18:34.336051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.215 [2024-07-25 10:18:34.336062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.215 [2024-07-25 10:18:34.336070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.215 [2024-07-25 10:18:34.336099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.215 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.345873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.345995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.346027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.346037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.346044] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.346069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 
00:29:55.478 [2024-07-25 10:18:34.355918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.356059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.356089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.356098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.356107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.356131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.366033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.366154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.366184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.366193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.366209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.366234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.375923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.376044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.376073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.376082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.376090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.376113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 
00:29:55.478 [2024-07-25 10:18:34.385835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.386099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.386131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.386141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.386148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.386171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.395990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.396160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.396193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.396212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.396220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.396247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.406011] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.406133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.406164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.406174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.406183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.406217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 
00:29:55.478 [2024-07-25 10:18:34.416048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.416162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.416193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.416210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.416226] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.416252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.426117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.426255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.426287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.426296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.426304] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.426328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 00:29:55.478 [2024-07-25 10:18:34.436076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.478 [2024-07-25 10:18:34.436191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.478 [2024-07-25 10:18:34.436230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.478 [2024-07-25 10:18:34.436241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.478 [2024-07-25 10:18:34.436248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.478 [2024-07-25 10:18:34.436272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.478 qpair failed and we were unable to recover it. 
00:29:55.479 [2024-07-25 10:18:34.446109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.446229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.446259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.446268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.446276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.446300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.456168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.456293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.456323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.456334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.456341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.456365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.466197] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.466320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.466348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.466357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.466364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.466386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 
00:29:55.479 [2024-07-25 10:18:34.476122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.476247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.476276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.476285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.476292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.476315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.486145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.486270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.486301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.486310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.486317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.486341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.496173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.496299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.496330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.496340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.496347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.496372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 
00:29:55.479 [2024-07-25 10:18:34.506327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.506483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.506513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.506531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.506538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.506562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.516339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.516453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.516482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.516492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.516499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.516523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.526358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.526478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.526508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.526518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.526525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.526548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 
00:29:55.479 [2024-07-25 10:18:34.536368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.536483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.536512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.536522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.536530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.536553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.546422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.546547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.546576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.546586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.546593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.546616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.556418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.556536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.556566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.556575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.556582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.556605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 
00:29:55.479 [2024-07-25 10:18:34.566481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.566636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.479 [2024-07-25 10:18:34.566665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.479 [2024-07-25 10:18:34.566674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.479 [2024-07-25 10:18:34.566682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.479 [2024-07-25 10:18:34.566705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.479 qpair failed and we were unable to recover it. 00:29:55.479 [2024-07-25 10:18:34.576538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.479 [2024-07-25 10:18:34.576656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.480 [2024-07-25 10:18:34.576687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.480 [2024-07-25 10:18:34.576696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.480 [2024-07-25 10:18:34.576703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.480 [2024-07-25 10:18:34.576727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-25 10:18:34.586533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.480 [2024-07-25 10:18:34.586674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.480 [2024-07-25 10:18:34.586704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.480 [2024-07-25 10:18:34.586713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.480 [2024-07-25 10:18:34.586720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.480 [2024-07-25 10:18:34.586743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.480 qpair failed and we were unable to recover it. 
00:29:55.480 [2024-07-25 10:18:34.596540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.480 [2024-07-25 10:18:34.596653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.480 [2024-07-25 10:18:34.596683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.480 [2024-07-25 10:18:34.596699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.480 [2024-07-25 10:18:34.596706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.480 [2024-07-25 10:18:34.596729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-25 10:18:34.606619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.480 [2024-07-25 10:18:34.606740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.480 [2024-07-25 10:18:34.606770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.480 [2024-07-25 10:18:34.606779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.480 [2024-07-25 10:18:34.606786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.480 [2024-07-25 10:18:34.606810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.743 [2024-07-25 10:18:34.616530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.616658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.616699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.616711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.616720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.616750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 
00:29:55.743 [2024-07-25 10:18:34.626606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.626736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.626770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.626780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.626788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.626814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 00:29:55.743 [2024-07-25 10:18:34.636561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.636676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.636706] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.636716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.636723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.636747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 00:29:55.743 [2024-07-25 10:18:34.646714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.646836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.646877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.646889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.646896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.646926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 
00:29:55.743 [2024-07-25 10:18:34.656723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.656846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.656888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.656900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.656907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.656936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 00:29:55.743 [2024-07-25 10:18:34.666740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.666872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.666914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.666926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.666933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.666964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 00:29:55.743 [2024-07-25 10:18:34.676768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.676894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.676936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.676947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.676955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.676985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 
00:29:55.743 [2024-07-25 10:18:34.686793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.743 [2024-07-25 10:18:34.686915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.743 [2024-07-25 10:18:34.686964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.743 [2024-07-25 10:18:34.686976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.743 [2024-07-25 10:18:34.686983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.743 [2024-07-25 10:18:34.687013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.743 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.696845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.696962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.696995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.697005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.697012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.697037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.706858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.706994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.707035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.707046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.707054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.707084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 
00:29:55.744 [2024-07-25 10:18:34.716940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.717053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.717085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.717095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.717102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.717127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.727009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.727143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.727173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.727183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.727190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.727228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.736975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.737093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.737122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.737132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.737139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.737163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 
00:29:55.744 [2024-07-25 10:18:34.747002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.747169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.747198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.747220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.747230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.747256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.757034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.757148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.757178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.757187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.757194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.757225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.767067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.767207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.767238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.767248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.767255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.767279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 
00:29:55.744 [2024-07-25 10:18:34.776982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.777092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.777129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.777139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.777146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.777169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.787148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.787281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.787311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.787321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.787328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.787350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.797176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.797298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.797327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.797337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.797344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.797368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 
00:29:55.744 [2024-07-25 10:18:34.807088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.807215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.807245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.807255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.807262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.807285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.817196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.744 [2024-07-25 10:18:34.817317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.744 [2024-07-25 10:18:34.817346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.744 [2024-07-25 10:18:34.817355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.744 [2024-07-25 10:18:34.817370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.744 [2024-07-25 10:18:34.817393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.744 qpair failed and we were unable to recover it. 00:29:55.744 [2024-07-25 10:18:34.827232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.745 [2024-07-25 10:18:34.827362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.745 [2024-07-25 10:18:34.827392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.745 [2024-07-25 10:18:34.827402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.745 [2024-07-25 10:18:34.827409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.745 [2024-07-25 10:18:34.827434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.745 qpair failed and we were unable to recover it. 
00:29:55.745 [2024-07-25 10:18:34.837282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.745 [2024-07-25 10:18:34.837410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.745 [2024-07-25 10:18:34.837439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.745 [2024-07-25 10:18:34.837449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.745 [2024-07-25 10:18:34.837456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.745 [2024-07-25 10:18:34.837479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.745 qpair failed and we were unable to recover it. 00:29:55.745 [2024-07-25 10:18:34.847331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.745 [2024-07-25 10:18:34.847441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.745 [2024-07-25 10:18:34.847469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.745 [2024-07-25 10:18:34.847478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.745 [2024-07-25 10:18:34.847488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.745 [2024-07-25 10:18:34.847511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.745 qpair failed and we were unable to recover it. 00:29:55.745 [2024-07-25 10:18:34.857345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.745 [2024-07-25 10:18:34.857460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.745 [2024-07-25 10:18:34.857490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.745 [2024-07-25 10:18:34.857500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.745 [2024-07-25 10:18:34.857507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.745 [2024-07-25 10:18:34.857530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.745 qpair failed and we were unable to recover it. 
00:29:55.745 [2024-07-25 10:18:34.867387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.745 [2024-07-25 10:18:34.867519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.745 [2024-07-25 10:18:34.867548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.745 [2024-07-25 10:18:34.867558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.745 [2024-07-25 10:18:34.867565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:55.745 [2024-07-25 10:18:34.867587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:55.745 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.877421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.877543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.877571] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.877581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.877588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.877611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.887446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.887562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.887593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.887603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.887610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.887633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-07-25 10:18:34.897465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.897608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.897636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.897645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.897653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.897674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.907541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.907661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.907689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.907706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.907714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.907737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.917412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.917526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.917555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.917566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.917573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.917595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-07-25 10:18:34.927577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.927685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.927714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.927724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.927731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.927754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.937596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.937719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.937761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.937772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.937780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.937809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.947618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.947756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.947797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.947808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.947816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.947845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-07-25 10:18:34.957639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.957768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.957809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.957820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.957827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.957856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.967697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.967819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.967850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.967860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.967868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.967892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 00:29:56.008 [2024-07-25 10:18:34.977798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.977944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.977973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.977981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.008 [2024-07-25 10:18:34.977988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.008 [2024-07-25 10:18:34.978010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.008 qpair failed and we were unable to recover it. 
00:29:56.008 [2024-07-25 10:18:34.987757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.008 [2024-07-25 10:18:34.987906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.008 [2024-07-25 10:18:34.987947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.008 [2024-07-25 10:18:34.987957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:34.987965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:34.987995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:34.997780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:34.997920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:34.997961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:34.997979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:34.997987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:34.998018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.007861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.007973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.008003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.008015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.008023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.008048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-07-25 10:18:35.017864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.017990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.018031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.018042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.018049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.018079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.027892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.028023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.028056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.028066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.028073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.028098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.037835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.037941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.037970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.037980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.037988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.038011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-07-25 10:18:35.047844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.047956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.047985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.047995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.048003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.048025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.057962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.058076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.058104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.058115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.058122] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.058144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.067903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.068071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.068100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.068109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.068116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.068138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-07-25 10:18:35.078027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.078149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.078179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.078189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.078196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.078226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.087957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.088082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.088120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.088130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.088137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.088163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.098148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.098264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.098295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.098305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.098312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.098335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 
00:29:56.009 [2024-07-25 10:18:35.108118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.108241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.108270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.108282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.108289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.108311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.009 qpair failed and we were unable to recover it. 00:29:56.009 [2024-07-25 10:18:35.118163] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.009 [2024-07-25 10:18:35.118314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.009 [2024-07-25 10:18:35.118344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.009 [2024-07-25 10:18:35.118354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.009 [2024-07-25 10:18:35.118361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.009 [2024-07-25 10:18:35.118384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.010 [2024-07-25 10:18:35.128186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.010 [2024-07-25 10:18:35.128311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.010 [2024-07-25 10:18:35.128341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.010 [2024-07-25 10:18:35.128352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.010 [2024-07-25 10:18:35.128359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.010 [2024-07-25 10:18:35.128388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.010 qpair failed and we were unable to recover it. 
00:29:56.010 [2024-07-25 10:18:35.138246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.010 [2024-07-25 10:18:35.138361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.010 [2024-07-25 10:18:35.138391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.010 [2024-07-25 10:18:35.138400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.010 [2024-07-25 10:18:35.138409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.010 [2024-07-25 10:18:35.138433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.010 qpair failed and we were unable to recover it. 00:29:56.272 [2024-07-25 10:18:35.148292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.272 [2024-07-25 10:18:35.148452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.272 [2024-07-25 10:18:35.148482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.272 [2024-07-25 10:18:35.148492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.272 [2024-07-25 10:18:35.148499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.272 [2024-07-25 10:18:35.148522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-07-25 10:18:35.158236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.272 [2024-07-25 10:18:35.158347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.272 [2024-07-25 10:18:35.158377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.272 [2024-07-25 10:18:35.158387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.272 [2024-07-25 10:18:35.158395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.272 [2024-07-25 10:18:35.158418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.272 qpair failed and we were unable to recover it. 
00:29:56.272 [2024-07-25 10:18:35.168233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.272 [2024-07-25 10:18:35.168394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.272 [2024-07-25 10:18:35.168424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.272 [2024-07-25 10:18:35.168434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.272 [2024-07-25 10:18:35.168442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.272 [2024-07-25 10:18:35.168466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-07-25 10:18:35.178308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.272 [2024-07-25 10:18:35.178417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.272 [2024-07-25 10:18:35.178460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.272 [2024-07-25 10:18:35.178470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.272 [2024-07-25 10:18:35.178477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.272 [2024-07-25 10:18:35.178500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.272 qpair failed and we were unable to recover it. 00:29:56.272 [2024-07-25 10:18:35.188371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.272 [2024-07-25 10:18:35.188497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.272 [2024-07-25 10:18:35.188526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.272 [2024-07-25 10:18:35.188536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.272 [2024-07-25 10:18:35.188542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.272 [2024-07-25 10:18:35.188565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.272 qpair failed and we were unable to recover it. 
00:29:56.272 [2024-07-25 10:18:35.198403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.198539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.198568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.198578] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.198585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.198607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.208384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.208493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.208523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.208533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.208541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.208564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.218500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.218609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.218639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.218648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.218662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.218683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-07-25 10:18:35.228476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.228607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.228636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.228645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.228652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.228674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.238480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.238599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.238628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.238638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.238645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.238667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.248523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.248669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.248699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.248709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.248717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.248740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-07-25 10:18:35.258557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.258668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.258697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.258707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.258714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.258737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.268597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.268749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.268789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.268801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.268808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.268839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.278605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.278729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.278771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.278782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.278790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.278820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-07-25 10:18:35.288617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.288780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.288822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.288833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.288840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.288869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.298664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.298786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.298827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.298838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.298846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.298876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.308723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.308983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.309027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.309038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.309054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.309083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 
00:29:56.273 [2024-07-25 10:18:35.318730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.318887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.273 [2024-07-25 10:18:35.318918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.273 [2024-07-25 10:18:35.318928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.273 [2024-07-25 10:18:35.318935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.273 [2024-07-25 10:18:35.318960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.273 qpair failed and we were unable to recover it. 00:29:56.273 [2024-07-25 10:18:35.328667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.273 [2024-07-25 10:18:35.328784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.328813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.328822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.328830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.328853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-07-25 10:18:35.338710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.338836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.338865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.338875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.338882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.338905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 
00:29:56.274 [2024-07-25 10:18:35.348747] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.348885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.348926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.348937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.348944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.348974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-07-25 10:18:35.358840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.358986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.359028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.359038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.359046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.359076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-07-25 10:18:35.368916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.369040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.369074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.369084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.369092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.369117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 
00:29:56.274 [2024-07-25 10:18:35.378949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.379064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.379093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.379103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.379111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.379135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-07-25 10:18:35.388965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.389104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.389137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.389148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.389155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.389179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 00:29:56.274 [2024-07-25 10:18:35.398967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.274 [2024-07-25 10:18:35.399075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.274 [2024-07-25 10:18:35.399106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.274 [2024-07-25 10:18:35.399123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.274 [2024-07-25 10:18:35.399131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.274 [2024-07-25 10:18:35.399154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.274 qpair failed and we were unable to recover it. 
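Each repeated block above is one failed attempt by the host to bring up an I/O qpair: the target-side _nvmf_ctrlr_add_io_qpair rejects the Fabrics CONNECT because it does not recognize controller ID 0x1, the host-side connect poll then sees the CONNECT completion fail (rc -5, sct 1, sc 130), and the qpair is dropped with CQ transport error -6. When triaging a run like this it helps to know how many attempts failed and over what wall-clock window. The sketch below is a minimal, illustrative Python tally written against exactly the line format shown here; the script name, the use of the "qpair failed and we were unable to recover it." sentinel, and the assumption that the console output has been saved to a local file are editorial assumptions, not part of the SPDK test suite.

#!/usr/bin/env python3
"""tally_qpair_failures.py: count the repeated NVMe-oF CONNECT failures in a saved console log."""
import re
import sys
from collections import Counter

SENTINEL = "qpair failed and we were unable to recover it."
# SPDK's own wall-clock stamp, e.g. [2024-07-25 10:18:35.288617]
SPDK_TS = re.compile(r"\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)\]")
# source:line:function right after the stamp, e.g. "] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*:"
ORIGIN = re.compile(r"\] ([\w.]+): ?(\d+):(\w+): \*ERROR\*:")


def summarize(path):
    failures = 0
    stamps = []
    by_origin = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for raw in fh:
            # A captured console line can hold several SPDK entries back to back,
            # so count every occurrence instead of assuming one entry per line.
            failures += raw.count(SENTINEL)
            stamps.extend(SPDK_TS.findall(raw))
            for src, line, func in ORIGIN.findall(raw):
                by_origin[f"{src}:{line} {func}"] += 1
    print(f"failed connect attempts : {failures}")
    if stamps:
        print(f"first / last error time : {stamps[0]} / {stamps[-1]}")
    for origin, count in by_origin.most_common():
        print(f"  {origin:<55} {count}")


if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "console.log")

Run as "python3 tally_qpair_failures.py console.log" (the file name is a placeholder). On this section it would pair every ctrlr.c:761 rejection with the matching nvme_fabric.c, nvme_tcp.c and nvme_qpair.c host-side errors and report the total number of unrecovered qpairs.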
00:29:56.536 [2024-07-25 10:18:35.408947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.409089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.409119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.409128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.409135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.409157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.418910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.419023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.419053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.419062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.419070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.419092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.429038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.429155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.429182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.429192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.429199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.429228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 
00:29:56.536 [2024-07-25 10:18:35.439006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.439108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.439133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.439142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.439148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.439171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.449041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.449141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.449168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.449177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.449184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.449212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.459038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.459151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.459175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.459185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.459192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.459218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 
00:29:56.536 [2024-07-25 10:18:35.469111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.469246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.469271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.469280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.469286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.469307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.479133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.479246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.479270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.479281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.479288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.479309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.489173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.489265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.489292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.489300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.489307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.489325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 
00:29:56.536 [2024-07-25 10:18:35.499232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.499337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.499359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.499368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.499375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.499394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.509245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.536 [2024-07-25 10:18:35.509356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.536 [2024-07-25 10:18:35.509377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.536 [2024-07-25 10:18:35.509386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.536 [2024-07-25 10:18:35.509393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.536 [2024-07-25 10:18:35.509411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.536 qpair failed and we were unable to recover it. 00:29:56.536 [2024-07-25 10:18:35.519206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.519332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.519353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.519362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.519369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.519388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 
00:29:56.537 [2024-07-25 10:18:35.529286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.529380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.529400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.529408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.529415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.529438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.539340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.539479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.539500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.539508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.539514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.539532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.549336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.549441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.549462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.549470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.549477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.549494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 
00:29:56.537 [2024-07-25 10:18:35.559315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.559414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.559434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.559442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.559449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.559466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.569333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.569438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.569458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.569466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.569474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.569491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.579445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.579552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.579577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.579584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.579591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.579608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 
00:29:56.537 [2024-07-25 10:18:35.589459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.589565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.589584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.589592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.589599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.589617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.599467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.599568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.599589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.599597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.599604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.599620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.609512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.609604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.609624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.609632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.609638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.609655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 
00:29:56.537 [2024-07-25 10:18:35.619649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.619751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.619770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.619778] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.619789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.619806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.629569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.629672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.629690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.629697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.537 [2024-07-25 10:18:35.629705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.537 [2024-07-25 10:18:35.629722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.537 qpair failed and we were unable to recover it. 00:29:56.537 [2024-07-25 10:18:35.639586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.537 [2024-07-25 10:18:35.639692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.537 [2024-07-25 10:18:35.639710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.537 [2024-07-25 10:18:35.639718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.538 [2024-07-25 10:18:35.639724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.538 [2024-07-25 10:18:35.639741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.538 qpair failed and we were unable to recover it. 
00:29:56.538 [2024-07-25 10:18:35.649595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.538 [2024-07-25 10:18:35.649695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.538 [2024-07-25 10:18:35.649722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.538 [2024-07-25 10:18:35.649732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.538 [2024-07-25 10:18:35.649739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.538 [2024-07-25 10:18:35.649761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.538 qpair failed and we were unable to recover it. 00:29:56.538 [2024-07-25 10:18:35.659545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.538 [2024-07-25 10:18:35.659646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.538 [2024-07-25 10:18:35.659665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.538 [2024-07-25 10:18:35.659673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.538 [2024-07-25 10:18:35.659681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.538 [2024-07-25 10:18:35.659698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.538 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-25 10:18:35.669611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.669726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.669745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.798 [2024-07-25 10:18:35.669753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.798 [2024-07-25 10:18:35.669760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.798 [2024-07-25 10:18:35.669780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.798 qpair failed and we were unable to recover it. 
00:29:56.798 [2024-07-25 10:18:35.679673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.679769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.679788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.798 [2024-07-25 10:18:35.679796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.798 [2024-07-25 10:18:35.679803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.798 [2024-07-25 10:18:35.679820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-25 10:18:35.689696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.689794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.689820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.798 [2024-07-25 10:18:35.689830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.798 [2024-07-25 10:18:35.689837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.798 [2024-07-25 10:18:35.689858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-25 10:18:35.699781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.699884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.699903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.798 [2024-07-25 10:18:35.699911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.798 [2024-07-25 10:18:35.699918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.798 [2024-07-25 10:18:35.699936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.798 qpair failed and we were unable to recover it. 
00:29:56.798 [2024-07-25 10:18:35.709797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.709913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.709939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.798 [2024-07-25 10:18:35.709948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.798 [2024-07-25 10:18:35.709959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.798 [2024-07-25 10:18:35.709980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.798 qpair failed and we were unable to recover it. 00:29:56.798 [2024-07-25 10:18:35.719779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.798 [2024-07-25 10:18:35.719890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.798 [2024-07-25 10:18:35.719916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.719925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.719933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.719954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.729775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.729881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.729907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.729916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.729922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.729944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 
00:29:56.799 [2024-07-25 10:18:35.739847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.739948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.739967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.739975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.739982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.739999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.749877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.749978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.749996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.750005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.750011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.750028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.759894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.759990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.760008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.760015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.760022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.760038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 
00:29:56.799 [2024-07-25 10:18:35.769931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.770038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.770064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.770073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.770080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.770101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.780017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.780128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.780147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.780156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.780162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.780179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.790025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.790130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.790149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.790157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.790163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.790180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 
00:29:56.799 [2024-07-25 10:18:35.800012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.800102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.800120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.800136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.800142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.800159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.810017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.810114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.810131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.810139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.810146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.810162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.820149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.820255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.820272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.820280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.820286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.820303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 
00:29:56.799 [2024-07-25 10:18:35.830113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.830220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.830237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.830245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.830251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.830269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.799 [2024-07-25 10:18:35.839991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.799 [2024-07-25 10:18:35.840084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.799 [2024-07-25 10:18:35.840100] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.799 [2024-07-25 10:18:35.840108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.799 [2024-07-25 10:18:35.840114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.799 [2024-07-25 10:18:35.840130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.799 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.850159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.850282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.850299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.850307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.850314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.850330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 
00:29:56.800 [2024-07-25 10:18:35.860135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.860240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.860257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.860265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.860272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.860288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.870213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.870317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.870335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.870343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.870349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.870364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.880239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.880331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.880348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.880356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.880362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.880378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 
00:29:56.800 [2024-07-25 10:18:35.890274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.890367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.890388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.890395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.890403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.890419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.900319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.900419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.900436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.900444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.900451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.900466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.910336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.910443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.910460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.910468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.910474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.910490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 
00:29:56.800 [2024-07-25 10:18:35.920333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.800 [2024-07-25 10:18:35.920425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.800 [2024-07-25 10:18:35.920443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.800 [2024-07-25 10:18:35.920450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.800 [2024-07-25 10:18:35.920457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:56.800 [2024-07-25 10:18:35.920472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:56.800 qpair failed and we were unable to recover it. 00:29:56.800 [2024-07-25 10:18:35.930335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.060 [2024-07-25 10:18:35.930457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.060 [2024-07-25 10:18:35.930475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.060 [2024-07-25 10:18:35.930485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.060 [2024-07-25 10:18:35.930492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.060 [2024-07-25 10:18:35.930512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.060 qpair failed and we were unable to recover it. 00:29:57.060 [2024-07-25 10:18:35.940425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.060 [2024-07-25 10:18:35.940525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.060 [2024-07-25 10:18:35.940543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.060 [2024-07-25 10:18:35.940551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.940557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.940574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 
00:29:57.061 [2024-07-25 10:18:35.950464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:35.950569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:35.950588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:35.950595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.950602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.950617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:35.960404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:35.960504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:35.960522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:35.960529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.960536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.960551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:35.970455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:35.970547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:35.970565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:35.970572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.970579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.970595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 
00:29:57.061 [2024-07-25 10:18:35.980529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:35.980631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:35.980653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:35.980660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.980667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.980683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:35.990579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:35.990685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:35.990702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:35.990710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:35.990716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:35.990732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:36.000584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.000683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.000701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.000709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.000716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.000731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 
00:29:57.061 [2024-07-25 10:18:36.010570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.010665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.010682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.010690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.010696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.010713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:36.020662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.020762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.020780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.020787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.020794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.020813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:36.030688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.030800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.030825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.030835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.030842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.030862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 
00:29:57.061 [2024-07-25 10:18:36.040684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.040789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.040815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.040824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.040831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.040852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:36.050687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.050790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.050816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.050826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.050832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.061 [2024-07-25 10:18:36.050854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.061 qpair failed and we were unable to recover it. 00:29:57.061 [2024-07-25 10:18:36.060784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.061 [2024-07-25 10:18:36.060891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.061 [2024-07-25 10:18:36.060918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.061 [2024-07-25 10:18:36.060928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.061 [2024-07-25 10:18:36.060934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.060956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 
00:29:57.062 [2024-07-25 10:18:36.070790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.070903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.070929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.070938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.070946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.070967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.080665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.080767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.080793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.080802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.080809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.080830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.090817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.090913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.090933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.090941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.090948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.090965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 
00:29:57.062 [2024-07-25 10:18:36.100897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.100998] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.101016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.101024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.101030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.101046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.110786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.110895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.110913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.110921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.110932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.110948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.120835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.120940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.120957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.120965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.120972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.120987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 
00:29:57.062 [2024-07-25 10:18:36.130933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.131033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.131050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.131057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.131064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.131080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.140986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.141087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.141105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.141113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.141120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.141136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.151009] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.151114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.151132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.151140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.151146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.151163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 
00:29:57.062 [2024-07-25 10:18:36.160980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.161077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.161094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.062 [2024-07-25 10:18:36.161101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.062 [2024-07-25 10:18:36.161109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.062 [2024-07-25 10:18:36.161124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.062 qpair failed and we were unable to recover it. 00:29:57.062 [2024-07-25 10:18:36.171036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.062 [2024-07-25 10:18:36.171133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.062 [2024-07-25 10:18:36.171150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.063 [2024-07-25 10:18:36.171158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.063 [2024-07-25 10:18:36.171164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.063 [2024-07-25 10:18:36.171180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.063 [2024-07-25 10:18:36.181073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.063 [2024-07-25 10:18:36.181171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.063 [2024-07-25 10:18:36.181189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.063 [2024-07-25 10:18:36.181196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.063 [2024-07-25 10:18:36.181208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.063 [2024-07-25 10:18:36.181225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.063 qpair failed and we were unable to recover it. 
00:29:57.063 [2024-07-25 10:18:36.191137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.063 [2024-07-25 10:18:36.191248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.063 [2024-07-25 10:18:36.191265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.063 [2024-07-25 10:18:36.191272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.063 [2024-07-25 10:18:36.191278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.063 [2024-07-25 10:18:36.191294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.063 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.201108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.201196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.201218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.201230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.201237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.201254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.211199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.211304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.211321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.211329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.211335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.211352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 
00:29:57.324 [2024-07-25 10:18:36.221132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.221272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.221289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.221296] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.221303] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.221319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.231247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.231349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.231365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.231374] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.231381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.231397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.241220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.241318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.241335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.241343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.241350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.241366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 
00:29:57.324 [2024-07-25 10:18:36.251257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.251355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.251372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.251380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.251386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.251402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.261384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.261485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.324 [2024-07-25 10:18:36.261502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.324 [2024-07-25 10:18:36.261510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.324 [2024-07-25 10:18:36.261516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.324 [2024-07-25 10:18:36.261533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.324 qpair failed and we were unable to recover it. 00:29:57.324 [2024-07-25 10:18:36.271346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.324 [2024-07-25 10:18:36.271447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.271464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.271472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.271478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.271495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-25 10:18:36.281435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.281529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.281546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.281554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.281560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.281576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.291409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.291521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.291542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.291550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.291556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.291572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.301430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.301528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.301545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.301554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.301560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.301576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-25 10:18:36.311414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.311521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.311538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.311546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.311552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.311568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.321443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.321541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.321559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.321567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.321573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.321589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.331483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.331578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.331596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.331604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.331610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.331626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-25 10:18:36.341543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.341643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.341660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.341669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.341675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.341690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.351546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.351645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.351663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.351670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.351677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.351692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.361552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.361648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.361665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.361673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.361679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.361695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-25 10:18:36.371531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.371607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.371624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.371631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.371639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.371654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.381683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.381788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.381810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.381817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.381824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.381841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 00:29:57.325 [2024-07-25 10:18:36.391654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.391753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.391771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.325 [2024-07-25 10:18:36.391779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.325 [2024-07-25 10:18:36.391786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.325 [2024-07-25 10:18:36.391802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.325 qpair failed and we were unable to recover it. 
00:29:57.325 [2024-07-25 10:18:36.401673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.325 [2024-07-25 10:18:36.401756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.325 [2024-07-25 10:18:36.401773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.401782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.401788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.401804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-25 10:18:36.411676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.326 [2024-07-25 10:18:36.411783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.326 [2024-07-25 10:18:36.411800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.411808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.411814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.411830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-25 10:18:36.421774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.326 [2024-07-25 10:18:36.421875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.326 [2024-07-25 10:18:36.421893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.421900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.421907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.421927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.326 [2024-07-25 10:18:36.431753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.326 [2024-07-25 10:18:36.431890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.326 [2024-07-25 10:18:36.431907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.431914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.431920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.431937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-25 10:18:36.441778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.326 [2024-07-25 10:18:36.441874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.326 [2024-07-25 10:18:36.441891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.441898] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.441905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.441920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 00:29:57.326 [2024-07-25 10:18:36.451733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.326 [2024-07-25 10:18:36.451829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.326 [2024-07-25 10:18:36.451846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.326 [2024-07-25 10:18:36.451854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.326 [2024-07-25 10:18:36.451860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.326 [2024-07-25 10:18:36.451876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.326 qpair failed and we were unable to recover it. 
00:29:57.587 [2024-07-25 10:18:36.461875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.461976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.461992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.462000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.462008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.462023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 00:29:57.587 [2024-07-25 10:18:36.471817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.471917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.471938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.471945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.471952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.471968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 00:29:57.587 [2024-07-25 10:18:36.481880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.481976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.481993] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.482000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.482007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.482023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 
00:29:57.587 [2024-07-25 10:18:36.491914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.492055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.492073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.492080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.492087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.492102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 00:29:57.587 [2024-07-25 10:18:36.501857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.501957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.501976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.501984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.501990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.502007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 00:29:57.587 [2024-07-25 10:18:36.511949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.512090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.512108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.512115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.512125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.512142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 
00:29:57.587 [2024-07-25 10:18:36.522113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.522213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.522231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.522238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.522245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.522261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.587 qpair failed and we were unable to recover it. 00:29:57.587 [2024-07-25 10:18:36.532027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.587 [2024-07-25 10:18:36.532121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.587 [2024-07-25 10:18:36.532138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.587 [2024-07-25 10:18:36.532145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.587 [2024-07-25 10:18:36.532152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.587 [2024-07-25 10:18:36.532168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.541959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.542069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.542086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.542094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.542100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.542115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 
00:29:57.588 [2024-07-25 10:18:36.552076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.552177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.552195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.552208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.552215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.552230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.562063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.562160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.562176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.562185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.562191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.562211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.572037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.572121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.572139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.572147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.572153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.572169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 
00:29:57.588 [2024-07-25 10:18:36.582189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.582290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.582307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.582315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.582322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.582338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.592126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.592231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.592249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.592257] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.592263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.592279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.602158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.602260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.602278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.602293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.602299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.602315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 
00:29:57.588 [2024-07-25 10:18:36.612238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.612330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.612348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.612355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.612362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.612377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.622273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.622383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.622400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.622408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.622414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.622431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.632193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.632297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.632314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.632322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.632329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.632345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 
00:29:57.588 [2024-07-25 10:18:36.642320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.642417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.642434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.642442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.642449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.588 [2024-07-25 10:18:36.642465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.588 qpair failed and we were unable to recover it. 00:29:57.588 [2024-07-25 10:18:36.652388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.588 [2024-07-25 10:18:36.652483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.588 [2024-07-25 10:18:36.652500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.588 [2024-07-25 10:18:36.652508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.588 [2024-07-25 10:18:36.652515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.652532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-07-25 10:18:36.662392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.662502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.662519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.662526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.662533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.662549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 
00:29:57.589 [2024-07-25 10:18:36.672415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.672517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.672534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.672542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.672548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.672565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-07-25 10:18:36.682404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.682501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.682518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.682526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.682533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.682548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-07-25 10:18:36.692451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.692542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.692559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.692571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.692577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.692593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 
00:29:57.589 [2024-07-25 10:18:36.702532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.702632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.702650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.702658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.702664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.702680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.589 [2024-07-25 10:18:36.712543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.589 [2024-07-25 10:18:36.712666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.589 [2024-07-25 10:18:36.712683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.589 [2024-07-25 10:18:36.712691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.589 [2024-07-25 10:18:36.712698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.589 [2024-07-25 10:18:36.712714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.589 qpair failed and we were unable to recover it. 00:29:57.850 [2024-07-25 10:18:36.722520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.722664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.722681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.722689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.722696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.722711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 
00:29:57.850 [2024-07-25 10:18:36.732548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.732657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.732674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.732682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.732688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.732704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 00:29:57.850 [2024-07-25 10:18:36.742598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.742702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.742719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.742727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.742734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.742749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 00:29:57.850 [2024-07-25 10:18:36.752623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.752724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.752742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.752750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.752757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.752776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 
00:29:57.850 [2024-07-25 10:18:36.762649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.762761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.762778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.762786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.762793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.762810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 00:29:57.850 [2024-07-25 10:18:36.772674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.772894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.772922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.850 [2024-07-25 10:18:36.772931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.850 [2024-07-25 10:18:36.772938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.850 [2024-07-25 10:18:36.772959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.850 qpair failed and we were unable to recover it. 00:29:57.850 [2024-07-25 10:18:36.782763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.850 [2024-07-25 10:18:36.782884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.850 [2024-07-25 10:18:36.782915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.782925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.782932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.782954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 
00:29:57.851 [2024-07-25 10:18:36.792698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.792802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.792827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.792837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.792843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.792864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.802754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.802857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.802883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.802892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.802899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.802920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.812664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.812768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.812788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.812796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.812802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.812819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 
00:29:57.851 [2024-07-25 10:18:36.822863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.822964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.822981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.822990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.822996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.823017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.832801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.832903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.832921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.832929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.832935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.832951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.842819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.842912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.842930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.842937] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.842943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.842959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 
00:29:57.851 [2024-07-25 10:18:36.852891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.852996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.853022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.853031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.853038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.853059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.862972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.863075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.863094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.863102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.863108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.863125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.872950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.873050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.873072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.873081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.873087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.873103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 
00:29:57.851 [2024-07-25 10:18:36.882842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.882937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.882955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.882963] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.882969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.882985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.892929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.893036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.893053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.893061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.893068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.893084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 00:29:57.851 [2024-07-25 10:18:36.903081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.903181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.851 [2024-07-25 10:18:36.903199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.851 [2024-07-25 10:18:36.903211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.851 [2024-07-25 10:18:36.903218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.851 [2024-07-25 10:18:36.903235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.851 qpair failed and we were unable to recover it. 
00:29:57.851 [2024-07-25 10:18:36.913105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.851 [2024-07-25 10:18:36.913226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.913243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.913252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.913262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.913279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 00:29:57.852 [2024-07-25 10:18:36.923073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.923168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.923186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.923193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.923206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.923222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 00:29:57.852 [2024-07-25 10:18:36.933076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.933166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.933183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.933190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.933197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.933216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 
00:29:57.852 [2024-07-25 10:18:36.943186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.943292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.943309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.943317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.943323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.943340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 00:29:57.852 [2024-07-25 10:18:36.953154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.953258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.953275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.953283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.953290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.953306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 00:29:57.852 [2024-07-25 10:18:36.963194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.963295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.963313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.963320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.963327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.963343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 
00:29:57.852 [2024-07-25 10:18:36.973231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.852 [2024-07-25 10:18:36.973353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.852 [2024-07-25 10:18:36.973370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.852 [2024-07-25 10:18:36.973378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.852 [2024-07-25 10:18:36.973384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:57.852 [2024-07-25 10:18:36.973400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.852 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:36.983284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:36.983393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:36.983410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:36.983418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:36.983424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:36.983441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:36.993297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:36.993402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:36.993419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:36.993427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:36.993433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:36.993448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 
00:29:58.148 [2024-07-25 10:18:37.003292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.003387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.003406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.003414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.003425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.003441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.013332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.013428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.013445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.013453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.013459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.013475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.023407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.023508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.023526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.023534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.023540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.023556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 
00:29:58.148 [2024-07-25 10:18:37.033392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.033494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.033511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.033519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.033525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.033540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.043321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.043420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.043437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.043445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.043452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.043468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.053528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.053747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.053765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.053772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.053779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.053794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 
00:29:58.148 [2024-07-25 10:18:37.063527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.063629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.063646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.063653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.063660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.063676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.073498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.073596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.073613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.073621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.073627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.073643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.083519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.083612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.083629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.083637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.083643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.083658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 
00:29:58.148 [2024-07-25 10:18:37.093616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.148 [2024-07-25 10:18:37.093740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.148 [2024-07-25 10:18:37.093757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.148 [2024-07-25 10:18:37.093768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.148 [2024-07-25 10:18:37.093774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.148 [2024-07-25 10:18:37.093790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.148 qpair failed and we were unable to recover it. 00:29:58.148 [2024-07-25 10:18:37.103628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.103728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.103746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.103754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.103760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.103776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.113612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.113723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.113749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.113758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.113765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.113786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 
00:29:58.149 [2024-07-25 10:18:37.123635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.123746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.123772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.123781] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.123788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.123809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.133658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.133757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.133783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.133792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.133800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.133821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.143752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.143857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.143882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.143891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.143898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.143920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 
00:29:58.149 [2024-07-25 10:18:37.153721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.153828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.153854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.153863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.153870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.153891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.163730] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.163824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.163843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.163852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.163858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.163876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.173782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.173879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.173895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.173903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.173910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.173926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 
00:29:58.149 [2024-07-25 10:18:37.183725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.183826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.183848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.183856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.183863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.183879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.193838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.193939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.193956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.193964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.193970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.193987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.203852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.203945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.203962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.203969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.203976] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.203991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 
00:29:58.149 [2024-07-25 10:18:37.213964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.214088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.214105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.214113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.214119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.214135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.223952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.224053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.224071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.149 [2024-07-25 10:18:37.224079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.149 [2024-07-25 10:18:37.224085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.149 [2024-07-25 10:18:37.224105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.149 qpair failed and we were unable to recover it. 00:29:58.149 [2024-07-25 10:18:37.233918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.149 [2024-07-25 10:18:37.234023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.149 [2024-07-25 10:18:37.234041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.150 [2024-07-25 10:18:37.234048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.150 [2024-07-25 10:18:37.234055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.150 [2024-07-25 10:18:37.234070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.150 qpair failed and we were unable to recover it. 
00:29:58.150 [2024-07-25 10:18:37.243974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.150 [2024-07-25 10:18:37.244069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.150 [2024-07-25 10:18:37.244086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.150 [2024-07-25 10:18:37.244094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.150 [2024-07-25 10:18:37.244101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.150 [2024-07-25 10:18:37.244117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.150 qpair failed and we were unable to recover it. 00:29:58.150 [2024-07-25 10:18:37.253958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.150 [2024-07-25 10:18:37.254050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.150 [2024-07-25 10:18:37.254067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.150 [2024-07-25 10:18:37.254075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.150 [2024-07-25 10:18:37.254082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.150 [2024-07-25 10:18:37.254097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.150 qpair failed and we were unable to recover it. 00:29:58.150 [2024-07-25 10:18:37.264056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.150 [2024-07-25 10:18:37.264154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.150 [2024-07-25 10:18:37.264171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.150 [2024-07-25 10:18:37.264179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.150 [2024-07-25 10:18:37.264186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.150 [2024-07-25 10:18:37.264232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.150 qpair failed and we were unable to recover it. 
00:29:58.150 [2024-07-25 10:18:37.274055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.150 [2024-07-25 10:18:37.274154] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.150 [2024-07-25 10:18:37.274174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.150 [2024-07-25 10:18:37.274182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.150 [2024-07-25 10:18:37.274188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.150 [2024-07-25 10:18:37.274209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.150 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.283961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.284056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.284073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.284081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.284088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.284104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.294070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.294163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.294181] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.294189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.294195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.294218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 
00:29:58.412 [2024-07-25 10:18:37.304166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.304266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.304284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.304291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.304298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.304314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.314143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.314242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.314261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.314269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.314283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.314300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.324193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.324296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.324314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.324322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.324329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.324345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 
00:29:58.412 [2024-07-25 10:18:37.334115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.334220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.334237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.334244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.334251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.334267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.344350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.344496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.344513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.344521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.344528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.412 [2024-07-25 10:18:37.344544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.412 qpair failed and we were unable to recover it. 00:29:58.412 [2024-07-25 10:18:37.354264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.412 [2024-07-25 10:18:37.354398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.412 [2024-07-25 10:18:37.354415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.412 [2024-07-25 10:18:37.354423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.412 [2024-07-25 10:18:37.354429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.354446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-25 10:18:37.364292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.364389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.364406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.364414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.364421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.364437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.374315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.374409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.374426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.374434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.374440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.374456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.384388] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.384489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.384506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.384513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.384520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.384535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-25 10:18:37.394399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.394498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.394515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.394523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.394530] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.394546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.404316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.404410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.404427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.404434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.404444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.404461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.414560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.414661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.414678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.414685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.414692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.414708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-25 10:18:37.424373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.424482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.424499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.424508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.424514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.424530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.434485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.434571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.434587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.434596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.434602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.434617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.444536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.444632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.444650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.444658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.444664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.444682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-25 10:18:37.454490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.454584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.454601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.454609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.454616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.454632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.464599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.464696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.464713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.464721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.464727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.464743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.474485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.474582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.474599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.474606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.474613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.474628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 
00:29:58.413 [2024-07-25 10:18:37.484635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.413 [2024-07-25 10:18:37.484733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.413 [2024-07-25 10:18:37.484750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.413 [2024-07-25 10:18:37.484758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.413 [2024-07-25 10:18:37.484764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.413 [2024-07-25 10:18:37.484781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.413 qpair failed and we were unable to recover it. 00:29:58.413 [2024-07-25 10:18:37.494656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.414 [2024-07-25 10:18:37.494753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.414 [2024-07-25 10:18:37.494770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.414 [2024-07-25 10:18:37.494782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.414 [2024-07-25 10:18:37.494788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.414 [2024-07-25 10:18:37.494804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-25 10:18:37.504723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.414 [2024-07-25 10:18:37.504819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.414 [2024-07-25 10:18:37.504836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.414 [2024-07-25 10:18:37.504844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.414 [2024-07-25 10:18:37.504851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.414 [2024-07-25 10:18:37.504866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.414 [2024-07-25 10:18:37.514717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.414 [2024-07-25 10:18:37.514822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.414 [2024-07-25 10:18:37.514840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.414 [2024-07-25 10:18:37.514848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.414 [2024-07-25 10:18:37.514855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.414 [2024-07-25 10:18:37.514876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-25 10:18:37.524706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.414 [2024-07-25 10:18:37.524805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.414 [2024-07-25 10:18:37.524823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.414 [2024-07-25 10:18:37.524831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.414 [2024-07-25 10:18:37.524837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.414 [2024-07-25 10:18:37.524853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.414 qpair failed and we were unable to recover it. 00:29:58.414 [2024-07-25 10:18:37.534733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.414 [2024-07-25 10:18:37.534826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.414 [2024-07-25 10:18:37.534844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.414 [2024-07-25 10:18:37.534851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.414 [2024-07-25 10:18:37.534858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.414 [2024-07-25 10:18:37.534874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.414 qpair failed and we were unable to recover it. 
00:29:58.676 [2024-07-25 10:18:37.544749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.676 [2024-07-25 10:18:37.544850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.676 [2024-07-25 10:18:37.544868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.676 [2024-07-25 10:18:37.544876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.676 [2024-07-25 10:18:37.544882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.676 [2024-07-25 10:18:37.544899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.676 qpair failed and we were unable to recover it. 00:29:58.676 [2024-07-25 10:18:37.554869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.554966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.554983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.554990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.554997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.555013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.564843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.564978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.564997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.565005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.565011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.565028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 
00:29:58.677 [2024-07-25 10:18:37.574732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.574828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.574846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.574853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.574860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.574876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.584896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.584983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.585003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.585011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.585017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.585033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.594931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.595075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.595092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.595100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.595107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.595124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 
00:29:58.677 [2024-07-25 10:18:37.604964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.605059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.605076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.605083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.605090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.605106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.615044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.615146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.615163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.615171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.615177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.615193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.625037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.625139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.625156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.625164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.625170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.625190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 
00:29:58.677 [2024-07-25 10:18:37.634945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.635043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.635060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.635068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.635074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.635090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.645056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.645155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.645172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.645180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.645187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faa98000b90 00:29:58.677 [2024-07-25 10:18:37.645207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.655032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.655128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.655148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.655155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.655161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaa0000b90 00:29:58.677 [2024-07-25 10:18:37.655176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.677 qpair failed and we were unable to recover it. 
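Every failure in this stretch points at the same listener described in the error text (trtype TCP, adrfam IPv4, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode1). As a side note, the same CONNECT exchange can be attempted by hand with the kernel initiator; the sketch below only illustrates the parameters involved and assumes nvme-cli and the nvme-tcp module are available, whereas the test itself drives the SPDK userspace host rather than the kernel path.

  # Manual connect to the listener the test is exercising (illustrative only).
  modprobe nvme-tcp
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list-subsys                                # check whether a controller attached
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # clean up afterwards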
00:29:58.677 [2024-07-25 10:18:37.665142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.677 [2024-07-25 10:18:37.665227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.677 [2024-07-25 10:18:37.665242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.677 [2024-07-25 10:18:37.665248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.677 [2024-07-25 10:18:37.665253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaa0000b90 00:29:58.677 [2024-07-25 10:18:37.665267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.677 qpair failed and we were unable to recover it. 00:29:58.677 [2024-07-25 10:18:37.665644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b16f20 is same with the state(5) to be set 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Read completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Write completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.677 Write completed with error (sct=0, sc=8) 00:29:58.677 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error 
(sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 [2024-07-25 10:18:37.666748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Read completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 Write completed with error (sct=0, sc=8) 00:29:58.678 starting I/O failed 00:29:58.678 [2024-07-25 10:18:37.666958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.678 [2024-07-25 10:18:37.675275] ctrlr.c: 
761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.678 [2024-07-25 10:18:37.675520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.678 [2024-07-25 10:18:37.675588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.678 [2024-07-25 10:18:37.675614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.678 [2024-07-25 10:18:37.675634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaa8000b90 00:29:58.678 [2024-07-25 10:18:37.675690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.678 qpair failed and we were unable to recover it. 00:29:58.678 [2024-07-25 10:18:37.685178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.678 [2024-07-25 10:18:37.685358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.678 [2024-07-25 10:18:37.685393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.678 [2024-07-25 10:18:37.685409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.678 [2024-07-25 10:18:37.685423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7faaa8000b90 00:29:58.678 [2024-07-25 10:18:37.685457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:58.678 qpair failed and we were unable to recover it. 00:29:58.678 [2024-07-25 10:18:37.695183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.678 [2024-07-25 10:18:37.695293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.678 [2024-07-25 10:18:37.695320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.678 [2024-07-25 10:18:37.695329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.678 [2024-07-25 10:18:37.695336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b09220 00:29:58.678 [2024-07-25 10:18:37.695356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.678 qpair failed and we were unable to recover it. 
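With this many near-identical "qpair failed and we were unable to recover it" entries, a summary is easier to read than the raw stream. A small shell sketch that tallies the failures by the tqpair pointer and qpair id reported in the CQ transport error lines; the log file name is a placeholder for a saved copy of this console output.

  # Summarize the connect failures in a saved console log.
  LOG=build.log   # placeholder path
  grep -c 'Connect command failed, rc -5' "$LOG"                            # total failed CONNECTs
  grep -o 'Failed to connect tqpair=0x[0-9a-f]*' "$LOG" | sort | uniq -c    # failures per qpair object
  grep -o 'on qpair id [0-9]*' "$LOG" | sort | uniq -c                      # failures per qpair id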
00:29:58.678 [2024-07-25 10:18:37.705227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.678 [2024-07-25 10:18:37.705339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.678 [2024-07-25 10:18:37.705366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.678 [2024-07-25 10:18:37.705375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.678 [2024-07-25 10:18:37.705382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b09220 00:29:58.678 [2024-07-25 10:18:37.705402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.678 qpair failed and we were unable to recover it. 00:29:58.678 [2024-07-25 10:18:37.705697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b16f20 (9): Bad file descriptor 00:29:58.678 Initializing NVMe Controllers 00:29:58.678 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.678 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:58.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:58.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:58.678 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:58.678 Initialization complete. Launching workers. 
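The "Initializing NVMe Controllers ... Launching workers" banner is the SPDK perf-style I/O generator attaching to 10.0.0.2:4420 once the disconnect/reconnect cycling settles. For reference, a comparable standalone invocation against the same listener would look roughly like the sketch below; the binary path and the queue-depth/size/workload/time values are assumptions for illustration, and only the transport ID string is taken from the error lines above.

  # Hypothetical standalone run of the SPDK perf example against the same subsystem.
  ./build/examples/perf \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -q 32 -o 4096 -w randrw -M 50 -t 10   # illustrative I/O parameters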
00:29:58.678 Starting thread on core 1 00:29:58.678 Starting thread on core 2 00:29:58.678 Starting thread on core 3 00:29:58.678 Starting thread on core 0 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:58.678 00:29:58.678 real 0m11.427s 00:29:58.678 user 0m20.060s 00:29:58.678 sys 0m4.380s 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.678 ************************************ 00:29:58.678 END TEST nvmf_target_disconnect_tc2 00:29:58.678 ************************************ 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:58.678 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:29:58.679 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:58.679 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:58.679 rmmod nvme_tcp 00:29:58.679 rmmod nvme_fabrics 00:29:58.679 rmmod nvme_keyring 00:29:58.939 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:58.939 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:29:58.939 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:29:58.939 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1481407 ']' 00:29:58.939 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1481407 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1481407 ']' 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 1481407 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1481407 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1481407' 00:29:58.940 killing process with pid 1481407 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@969 -- # kill 1481407 00:29:58.940 10:18:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 1481407 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.940 10:18:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:01.485 00:30:01.485 real 0m21.280s 00:30:01.485 user 0m48.170s 00:30:01.485 sys 0m10.039s 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 ************************************ 00:30:01.485 END TEST nvmf_target_disconnect 00:30:01.485 ************************************ 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:01.485 00:30:01.485 real 6m18.061s 00:30:01.485 user 11m8.291s 00:30:01.485 sys 2m5.041s 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.485 10:18:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 ************************************ 00:30:01.485 END TEST nvmf_host 00:30:01.485 ************************************ 00:30:01.485 00:30:01.485 real 22m45.820s 00:30:01.485 user 47m24.442s 00:30:01.485 sys 7m11.588s 00:30:01.485 10:18:40 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:01.485 10:18:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 ************************************ 00:30:01.485 END TEST nvmf_tcp 00:30:01.485 ************************************ 00:30:01.485 10:18:40 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:30:01.485 10:18:40 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:01.485 10:18:40 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:01.485 10:18:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:01.485 10:18:40 -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 ************************************ 00:30:01.485 START TEST spdkcli_nvmf_tcp 00:30:01.485 ************************************ 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:01.485 * Looking for test storage... 
00:30:01.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1483235 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1483235 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 1483235 ']' 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:01.485 10:18:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.485 [2024-07-25 10:18:40.459182] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:01.486 [2024-07-25 10:18:40.459257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1483235 ] 00:30:01.486 EAL: No free 2048 kB hugepages reported on node 1 00:30:01.486 [2024-07-25 10:18:40.522410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:01.486 [2024-07-25 10:18:40.598661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.486 [2024-07-25 10:18:40.598664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.428 10:18:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:02.428 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:02.428 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:02.428 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:02.428 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:02.428 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:02.428 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:02.428 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:02.428 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:02.428 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:02.428 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:02.428 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:02.428 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:02.428 ' 00:30:04.975 [2024-07-25 10:18:43.583757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.920 [2024-07-25 10:18:44.751577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:07.835 [2024-07-25 10:18:46.889762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:09.750 [2024-07-25 10:18:48.727142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:11.283 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:11.283 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:11.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:11.284 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:11.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:11.284 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:11.284 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:11.284 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:11.284 10:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:11.545 10:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.807 10:18:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:11.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:11.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:11.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:11.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:11.807 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:11.807 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:11.807 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:11.807 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:11.807 ' 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:17.098 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:17.098 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:17.098 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:17.098 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1483235 ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1483235' 00:30:17.098 killing process with pid 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1483235 ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1483235 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 1483235 ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 1483235 00:30:17.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1483235) - No such process 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 1483235 is not found' 00:30:17.098 Process with pid 1483235 is not found 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:17.098 00:30:17.098 real 0m15.587s 00:30:17.098 user 0m32.099s 00:30:17.098 sys 0m0.738s 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:17.098 10:18:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:17.098 ************************************ 00:30:17.098 END TEST spdkcli_nvmf_tcp 00:30:17.098 ************************************ 00:30:17.098 10:18:55 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:17.098 10:18:55 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:30:17.098 10:18:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:17.098 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:30:17.098 ************************************ 00:30:17.098 START TEST nvmf_identify_passthru 00:30:17.098 ************************************ 00:30:17.098 10:18:55 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:17.098 * Looking for test storage... 00:30:17.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.098 10:18:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.098 10:18:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:17.099 10:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.099 10:18:56 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:17.099 10:18:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.099 10:18:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:17.099 10:18:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:17.099 10:18:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:17.099 10:18:56 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:17.099 10:18:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:23.694 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:23.695 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:23.695 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:23.695 10:19:02 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:23.695 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:23.695 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:23.695 10:19:02 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:23.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:30:23.695 00:30:23.695 --- 10.0.0.2 ping statistics --- 00:30:23.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.695 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.373 ms 00:30:23.695 00:30:23.695 --- 10.0.0.1 ping statistics --- 00:30:23.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.695 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:23.695 10:19:02 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:23.957 10:19:02 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:23.957 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:23.958 10:19:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:23.958 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.530 
10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:24.530 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:24.530 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:24.530 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:24.530 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.790 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:24.790 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:24.790 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:24.790 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.051 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.051 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1489980 00:30:25.051 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:25.051 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:25.051 10:19:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1489980 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 1489980 ']' 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:25.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:25.051 10:19:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.051 [2024-07-25 10:19:03.998556] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:25.051 [2024-07-25 10:19:03.998610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:25.051 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.051 [2024-07-25 10:19:04.065818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:25.051 [2024-07-25 10:19:04.138711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.051 [2024-07-25 10:19:04.138751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:25.051 [2024-07-25 10:19:04.138758] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:25.051 [2024-07-25 10:19:04.138765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:25.051 [2024-07-25 10:19:04.138770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:25.051 [2024-07-25 10:19:04.138909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.051 [2024-07-25 10:19:04.139029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:25.051 [2024-07-25 10:19:04.139185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.051 [2024-07-25 10:19:04.139186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:25.995 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:25.995 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:30:25.995 10:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:25.995 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.995 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.995 INFO: Log level set to 20 00:30:25.995 INFO: Requests: 00:30:25.995 { 00:30:25.995 "jsonrpc": "2.0", 00:30:25.995 "method": "nvmf_set_config", 00:30:25.995 "id": 1, 00:30:25.995 "params": { 00:30:25.995 "admin_cmd_passthru": { 00:30:25.995 "identify_ctrlr": true 00:30:25.995 } 00:30:25.995 } 00:30:25.995 } 00:30:25.995 00:30:25.995 INFO: response: 00:30:25.995 { 00:30:25.995 "jsonrpc": "2.0", 00:30:25.995 "id": 1, 00:30:25.996 "result": true 00:30:25.996 } 00:30:25.996 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.996 10:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.996 INFO: Setting log level to 20 00:30:25.996 INFO: Setting log level to 20 00:30:25.996 INFO: Log level set to 20 00:30:25.996 INFO: Log level set to 20 00:30:25.996 INFO: Requests: 00:30:25.996 { 00:30:25.996 "jsonrpc": "2.0", 00:30:25.996 "method": "framework_start_init", 00:30:25.996 "id": 1 00:30:25.996 } 00:30:25.996 00:30:25.996 INFO: Requests: 00:30:25.996 { 00:30:25.996 "jsonrpc": "2.0", 00:30:25.996 "method": "framework_start_init", 00:30:25.996 "id": 1 00:30:25.996 } 00:30:25.996 00:30:25.996 [2024-07-25 10:19:04.858639] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:25.996 INFO: response: 00:30:25.996 { 00:30:25.996 "jsonrpc": "2.0", 00:30:25.996 "id": 1, 00:30:25.996 "result": true 00:30:25.996 } 00:30:25.996 00:30:25.996 INFO: response: 00:30:25.996 { 00:30:25.996 "jsonrpc": "2.0", 00:30:25.996 "id": 1, 00:30:25.996 "result": true 00:30:25.996 } 00:30:25.996 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.996 10:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.996 10:19:04 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:25.996 INFO: Setting log level to 40 00:30:25.996 INFO: Setting log level to 40 00:30:25.996 INFO: Setting log level to 40 00:30:25.996 [2024-07-25 10:19:04.871964] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:25.996 10:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:25.996 10:19:04 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:25.996 10:19:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.257 Nvme0n1 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.257 [2024-07-25 10:19:05.253553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.257 [ 00:30:26.257 { 00:30:26.257 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:26.257 "subtype": "Discovery", 00:30:26.257 "listen_addresses": [], 00:30:26.257 "allow_any_host": true, 00:30:26.257 "hosts": [] 00:30:26.257 }, 00:30:26.257 { 00:30:26.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:26.257 "subtype": "NVMe", 00:30:26.257 "listen_addresses": [ 00:30:26.257 { 00:30:26.257 "trtype": "TCP", 00:30:26.257 "adrfam": "IPv4", 00:30:26.257 "traddr": "10.0.0.2", 00:30:26.257 "trsvcid": "4420" 00:30:26.257 } 00:30:26.257 ], 00:30:26.257 "allow_any_host": true, 00:30:26.257 "hosts": [], 00:30:26.257 "serial_number": 
"SPDK00000000000001", 00:30:26.257 "model_number": "SPDK bdev Controller", 00:30:26.257 "max_namespaces": 1, 00:30:26.257 "min_cntlid": 1, 00:30:26.257 "max_cntlid": 65519, 00:30:26.257 "namespaces": [ 00:30:26.257 { 00:30:26.257 "nsid": 1, 00:30:26.257 "bdev_name": "Nvme0n1", 00:30:26.257 "name": "Nvme0n1", 00:30:26.257 "nguid": "36344730526054870025384500000044", 00:30:26.257 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:26.257 } 00:30:26.257 ] 00:30:26.257 } 00:30:26.257 ] 00:30:26.257 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:26.257 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:26.257 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:26.519 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:26.519 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:26.519 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.519 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.520 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:26.520 10:19:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:26.520 rmmod nvme_tcp 00:30:26.520 rmmod nvme_fabrics 00:30:26.520 rmmod nvme_keyring 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:26.520 10:19:05 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1489980 ']' 00:30:26.520 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1489980 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 1489980 ']' 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 1489980 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:26.520 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1489980 00:30:26.781 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:26.781 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:26.781 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1489980' 00:30:26.781 killing process with pid 1489980 00:30:26.781 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 1489980 00:30:26.781 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 1489980 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:27.043 10:19:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.043 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:27.043 10:19:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.957 10:19:08 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:28.957 00:30:28.957 real 0m12.101s 00:30:28.957 user 0m9.432s 00:30:28.957 sys 0m5.754s 00:30:28.957 10:19:08 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:28.957 10:19:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:28.957 ************************************ 00:30:28.957 END TEST nvmf_identify_passthru 00:30:28.957 ************************************ 00:30:28.957 10:19:08 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:28.957 10:19:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:28.957 10:19:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:28.957 10:19:08 -- common/autotest_common.sh@10 -- # set +x 00:30:29.220 ************************************ 00:30:29.220 START TEST nvmf_dif 00:30:29.220 ************************************ 00:30:29.220 10:19:08 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:29.220 * Looking for test storage... 
00:30:29.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.220 10:19:08 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.220 10:19:08 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.220 10:19:08 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.220 10:19:08 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.220 10:19:08 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.220 10:19:08 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.220 10:19:08 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:30:29.220 10:19:08 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:29.220 10:19:08 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.220 10:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.220 10:19:08 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:29.220 10:19:08 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:29.220 10:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:37.368 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:37.368 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
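For each matched PCI function, the device loop running here resolves the bound netdev through that function's sysfs node and then strips the path down to the bare interface name. Condensed, with the address and resulting name taken from this run (everything else is plain bash):

    pci=0000:4b:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0
    echo "${pci_net_devs[@]##*/}"                      # prefix-strip -> cvl_0_0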
00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:37.368 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:37.368 10:19:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:37.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.369 10:19:15 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:37.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:30:37.369 00:30:37.369 --- 10.0.0.2 ping statistics --- 00:30:37.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.369 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:30:37.369 00:30:37.369 --- 10.0.0.1 ping statistics --- 00:30:37.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.369 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:37.369 10:19:15 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:39.918 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:30:39.918 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:30:39.918 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:40.180 10:19:19 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:40.180 10:19:19 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1496109 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1496109 00:30:40.180 10:19:19 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 1496109 ']' 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:40.180 10:19:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:40.180 [2024-07-25 10:19:19.223936] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:40.180 [2024-07-25 10:19:19.224003] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.180 EAL: No free 2048 kB hugepages reported on node 1 00:30:40.180 [2024-07-25 10:19:19.294898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.441 [2024-07-25 10:19:19.368530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.441 [2024-07-25 10:19:19.368568] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.441 [2024-07-25 10:19:19.368576] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.441 [2024-07-25 10:19:19.368582] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:40.441 [2024-07-25 10:19:19.368588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
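With the namespace plumbing and the target app now up for the DIF tests, the bring-up traced above condenses to roughly the following, run from the SPDK repo root. The ip/iptables/nvmf_tgt invocations are the ones logged; the trailing rpc.py probe is an assumption standing in for the suite's waitforlisten helper.

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    ./scripts/rpc.py -t 120 rpc_get_methods > /dev/null   # hypothetical readiness check

The split puts the target (10.0.0.2, inside cvl_0_0_ns_spdk) and the initiator (10.0.0.1, in the default namespace) on opposite sides of the two physical E810 ports, so NVMe/TCP traffic crosses a real link rather than loopback.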
00:30:40.441 [2024-07-25 10:19:19.368612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:41.012 10:19:19 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:41.012 10:19:19 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:30:41.012 10:19:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:41.012 10:19:19 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:41.012 10:19:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 10:19:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.012 10:19:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:41.012 10:19:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 [2024-07-25 10:19:20.039414] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.012 10:19:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 ************************************ 00:30:41.012 START TEST fio_dif_1_default 00:30:41.012 ************************************ 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 bdev_null0 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:41.012 [2024-07-25 10:19:20.107733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:41.012 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:41.013 { 00:30:41.013 "params": { 00:30:41.013 "name": "Nvme$subsystem", 00:30:41.013 "trtype": "$TEST_TRANSPORT", 00:30:41.013 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:41.013 "adrfam": "ipv4", 00:30:41.013 "trsvcid": "$NVMF_PORT", 00:30:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:41.013 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:41.013 "hdgst": ${hdgst:-false}, 00:30:41.013 "ddgst": ${ddgst:-false} 00:30:41.013 }, 00:30:41.013 "method": "bdev_nvme_attach_controller" 00:30:41.013 } 00:30:41.013 EOF 00:30:41.013 )") 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1345 -- # grep libasan 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:41.013 10:19:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:41.013 "params": { 00:30:41.013 "name": "Nvme0", 00:30:41.013 "trtype": "tcp", 00:30:41.013 "traddr": "10.0.0.2", 00:30:41.013 "adrfam": "ipv4", 00:30:41.013 "trsvcid": "4420", 00:30:41.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.013 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:41.013 "hdgst": false, 00:30:41.013 "ddgst": false 00:30:41.013 }, 00:30:41.013 "method": "bdev_nvme_attach_controller" 00:30:41.013 }' 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:41.307 10:19:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:41.571 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:41.571 fio-3.35 00:30:41.571 Starting 1 thread 00:30:41.571 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.802 00:30:53.802 filename0: (groupid=0, jobs=1): err= 0: pid=1496643: Thu Jul 25 10:19:31 2024 00:30:53.802 read: IOPS=181, BW=725KiB/s (743kB/s)(7264KiB/10014msec) 00:30:53.802 slat (nsec): min=5370, max=54580, avg=6196.28, stdev=1793.23 00:30:53.802 clat (usec): min=1161, max=43886, avg=22040.17, stdev=20365.89 00:30:53.802 lat (usec): min=1167, max=43923, avg=22046.36, stdev=20365.86 00:30:53.802 clat percentiles (usec): 00:30:53.802 | 1.00th=[ 1385], 5.00th=[ 1532], 10.00th=[ 1565], 20.00th=[ 1598], 00:30:53.802 | 30.00th=[ 1614], 40.00th=[ 1631], 50.00th=[41681], 60.00th=[42206], 00:30:53.802 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:53.802 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:30:53.802 | 99.99th=[43779] 00:30:53.802 bw ( KiB/s): min= 672, max= 768, per=99.81%, avg=724.80, stdev=33.28, samples=20 00:30:53.802 iops : min= 168, max= 192, avg=181.20, stdev= 8.32, samples=20 
00:30:53.802 lat (msec) : 2=49.78%, 50=50.22% 00:30:53.802 cpu : usr=95.82%, sys=3.97%, ctx=19, majf=0, minf=245 00:30:53.802 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:53.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:53.802 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:53.802 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:53.802 00:30:53.802 Run status group 0 (all jobs): 00:30:53.802 READ: bw=725KiB/s (743kB/s), 725KiB/s-725KiB/s (743kB/s-743kB/s), io=7264KiB (7438kB), run=10014-10014msec 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 00:30:53.802 real 0m11.134s 00:30:53.802 user 0m22.045s 00:30:53.802 sys 0m0.709s 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 ************************************ 00:30:53.802 END TEST fio_dif_1_default 00:30:53.802 ************************************ 00:30:53.802 10:19:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:53.802 10:19:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:53.802 10:19:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 ************************************ 00:30:53.802 START TEST fio_dif_1_multi_subsystems 00:30:53.802 ************************************ 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 
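The trace that follows repeats the same four-RPC bring-up for every subsystem id handed to create_subsystems. A minimal sketch of that per-subsystem step, assuming rpc_cmd is the autotest wrapper that forwards to SPDK's scripts/rpc.py (all flag values are copied from the trace; the real helper lives in target/dif.sh and may differ in detail):

create_subsystem_sketch() {
    local sub_id=$1
    # 64 MB null bdev with 512-byte blocks, 16-byte metadata and DIF type 1
    rpc_cmd bdev_null_create "bdev_null${sub_id}" 64 512 --md-size 16 --dif-type 1
    # NVMe-oF subsystem that accepts any host NQN
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}" \
        --serial-number "53313233-${sub_id}" --allow-any-host
    # expose the null bdev as a namespace and listen on NVMe/TCP 10.0.0.2:4420
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub_id}" "bdev_null${sub_id}"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub_id}" \
        -t tcp -a 10.0.0.2 -s 4420
}

For fio_dif_1_multi_subsystems this runs twice (ids 0 and 1), which is why the listener notice appears once per cnode in the trace below.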
00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 bdev_null0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 [2024-07-25 10:19:31.322646] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 bdev_null1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.802 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.803 { 00:30:53.803 "params": { 00:30:53.803 "name": "Nvme$subsystem", 00:30:53.803 "trtype": "$TEST_TRANSPORT", 00:30:53.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.803 "adrfam": "ipv4", 00:30:53.803 "trsvcid": "$NVMF_PORT", 00:30:53.803 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.803 "hdgst": ${hdgst:-false}, 00:30:53.803 "ddgst": ${ddgst:-false} 00:30:53.803 }, 00:30:53.803 "method": "bdev_nvme_attach_controller" 00:30:53.803 } 00:30:53.803 EOF 00:30:53.803 )") 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:53.803 { 00:30:53.803 "params": { 00:30:53.803 "name": "Nvme$subsystem", 00:30:53.803 "trtype": "$TEST_TRANSPORT", 00:30:53.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:53.803 "adrfam": "ipv4", 00:30:53.803 "trsvcid": "$NVMF_PORT", 00:30:53.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:53.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:53.803 "hdgst": ${hdgst:-false}, 00:30:53.803 "ddgst": ${ddgst:-false} 00:30:53.803 }, 00:30:53.803 "method": "bdev_nvme_attach_controller" 00:30:53.803 } 00:30:53.803 EOF 00:30:53.803 )") 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:53.803 "params": { 00:30:53.803 "name": "Nvme0", 00:30:53.803 "trtype": "tcp", 00:30:53.803 "traddr": "10.0.0.2", 00:30:53.803 "adrfam": "ipv4", 00:30:53.803 "trsvcid": "4420", 00:30:53.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:53.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:53.803 "hdgst": false, 00:30:53.803 "ddgst": false 00:30:53.803 }, 00:30:53.803 "method": "bdev_nvme_attach_controller" 00:30:53.803 },{ 00:30:53.803 "params": { 00:30:53.803 "name": "Nvme1", 00:30:53.803 "trtype": "tcp", 00:30:53.803 "traddr": "10.0.0.2", 00:30:53.803 "adrfam": "ipv4", 00:30:53.803 "trsvcid": "4420", 00:30:53.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.803 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:53.803 "hdgst": false, 00:30:53.803 "ddgst": false 00:30:53.803 }, 00:30:53.803 "method": "bdev_nvme_attach_controller" 00:30:53.803 }' 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:53.803 10:19:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:53.803 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.803 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:53.803 fio-3.35 00:30:53.803 Starting 2 threads 00:30:53.803 EAL: No free 2048 kB hugepages reported on node 1 00:31:03.804 00:31:03.804 filename0: (groupid=0, jobs=1): err= 0: pid=1498872: Thu Jul 25 10:19:42 2024 00:31:03.804 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:31:03.804 slat (nsec): min=5386, max=38120, avg=6383.14, stdev=1718.40 00:31:03.804 clat (usec): min=41767, max=43195, avg=41987.64, stdev=101.70 00:31:03.804 lat (usec): min=41773, max=43233, avg=41994.02, stdev=102.29 00:31:03.804 clat percentiles (usec): 00:31:03.804 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:31:03.804 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:03.804 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:03.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:03.805 | 99.99th=[43254] 
00:31:03.805 bw ( KiB/s): min= 352, max= 384, per=34.41%, avg=380.80, stdev= 9.85, samples=20 00:31:03.805 iops : min= 88, max= 96, avg=95.20, stdev= 2.46, samples=20 00:31:03.805 lat (msec) : 50=100.00% 00:31:03.805 cpu : usr=97.11%, sys=2.68%, ctx=13, majf=0, minf=164 00:31:03.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.805 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.805 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:03.805 filename1: (groupid=0, jobs=1): err= 0: pid=1498873: Thu Jul 25 10:19:42 2024 00:31:03.805 read: IOPS=181, BW=724KiB/s (742kB/s)(7264KiB/10029msec) 00:31:03.805 slat (nsec): min=5369, max=32860, avg=6378.79, stdev=1426.19 00:31:03.805 clat (usec): min=1423, max=43310, avg=22071.46, stdev=20366.61 00:31:03.805 lat (usec): min=1428, max=43343, avg=22077.84, stdev=20366.61 00:31:03.805 clat percentiles (usec): 00:31:03.805 | 1.00th=[ 1500], 5.00th=[ 1582], 10.00th=[ 1598], 20.00th=[ 1614], 00:31:03.805 | 30.00th=[ 1614], 40.00th=[ 1631], 50.00th=[41681], 60.00th=[42206], 00:31:03.805 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:03.805 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:03.805 | 99.99th=[43254] 00:31:03.805 bw ( KiB/s): min= 672, max= 768, per=65.56%, avg=724.80, stdev=31.62, samples=20 00:31:03.805 iops : min= 168, max= 192, avg=181.20, stdev= 7.90, samples=20 00:31:03.805 lat (msec) : 2=49.56%, 4=0.22%, 50=50.22% 00:31:03.805 cpu : usr=96.86%, sys=2.93%, ctx=15, majf=0, minf=85 00:31:03.805 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.805 issued rwts: total=1816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.805 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:03.805 00:31:03.805 Run status group 0 (all jobs): 00:31:03.805 READ: bw=1104KiB/s (1131kB/s), 381KiB/s-724KiB/s (390kB/s-742kB/s), io=10.8MiB (11.4MB), run=10029-10040msec 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 00:31:03.805 real 0m11.344s 00:31:03.805 user 0m33.178s 00:31:03.805 sys 0m0.864s 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 ************************************ 00:31:03.805 END TEST fio_dif_1_multi_subsystems 00:31:03.805 ************************************ 00:31:03.805 10:19:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:03.805 10:19:42 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:03.805 10:19:42 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 ************************************ 00:31:03.805 START TEST fio_dif_rand_params 00:31:03.805 ************************************ 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 bdev_null0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:03.805 [2024-07-25 10:19:42.757541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:03.805 { 00:31:03.805 "params": { 00:31:03.805 "name": "Nvme$subsystem", 00:31:03.805 "trtype": "$TEST_TRANSPORT", 00:31:03.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:03.805 "adrfam": "ipv4", 00:31:03.805 "trsvcid": "$NVMF_PORT", 00:31:03.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:03.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:03.805 "hdgst": ${hdgst:-false}, 00:31:03.805 "ddgst": ${ddgst:-false} 00:31:03.805 }, 00:31:03.805 "method": 
"bdev_nvme_attach_controller" 00:31:03.805 } 00:31:03.805 EOF 00:31:03.805 )") 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:03.805 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:03.806 "params": { 00:31:03.806 "name": "Nvme0", 00:31:03.806 "trtype": "tcp", 00:31:03.806 "traddr": "10.0.0.2", 00:31:03.806 "adrfam": "ipv4", 00:31:03.806 "trsvcid": "4420", 00:31:03.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:03.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:03.806 "hdgst": false, 00:31:03.806 "ddgst": false 00:31:03.806 }, 00:31:03.806 "method": "bdev_nvme_attach_controller" 00:31:03.806 }' 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:03.806 10:19:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:04.116 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:04.116 ... 
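For this pass the randomized parameters chosen earlier in the trace were NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5, and the filename0 banner above confirms three spdk_bdev jobs at 128 KiB and queue depth 3. A rough reconstruction of the job file gen_fio_conf feeds to fio over /dev/fd/61; bs, iodepth, runtime, numjobs and rw come from the log, while the remaining keys are assumptions and the exact template lives in target/dif.sh:

gen_fio_conf_sketch() {
    # thread/direct/time_based and the Nvme0n1 filename below are assumed values,
    # not read from the trace; Nvme0n1 follows SPDK's controller-name + nsid convention
    cat <<FIO
[global]
thread=1
ioengine=spdk_bdev
direct=1
bs=128k
iodepth=3
runtime=5
time_based=1

[filename0]
rw=randread
numjobs=3
filename=Nvme0n1
FIO
}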
00:31:04.116 fio-3.35 00:31:04.116 Starting 3 threads 00:31:04.116 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.717 00:31:10.717 filename0: (groupid=0, jobs=1): err= 0: pid=1501269: Thu Jul 25 10:19:48 2024 00:31:10.717 read: IOPS=86, BW=10.8MiB/s (11.3MB/s)(54.4MiB/5032msec) 00:31:10.717 slat (nsec): min=5383, max=52031, avg=8210.38, stdev=3064.32 00:31:10.717 clat (usec): min=8198, max=58170, avg=34684.70, stdev=21345.84 00:31:10.717 lat (usec): min=8206, max=58179, avg=34692.91, stdev=21345.88 00:31:10.717 clat percentiles (usec): 00:31:10.717 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9634], 20.00th=[10552], 00:31:10.717 | 30.00th=[11863], 40.00th=[13173], 50.00th=[52167], 60.00th=[53216], 00:31:10.717 | 70.00th=[53740], 80.00th=[54264], 90.00th=[54789], 95.00th=[55837], 00:31:10.717 | 99.00th=[57410], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:31:10.717 | 99.99th=[57934] 00:31:10.717 bw ( KiB/s): min= 8448, max=17664, per=34.71%, avg=11059.20, stdev=2900.83, samples=10 00:31:10.717 iops : min= 66, max= 138, avg=86.40, stdev=22.66, samples=10 00:31:10.717 lat (msec) : 10=12.87%, 20=31.95%, 50=0.69%, 100=54.48% 00:31:10.717 cpu : usr=97.12%, sys=2.54%, ctx=7, majf=0, minf=158 00:31:10.717 IO depths : 1=12.9%, 2=87.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 issued rwts: total=435,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.717 filename0: (groupid=0, jobs=1): err= 0: pid=1501270: Thu Jul 25 10:19:48 2024 00:31:10.717 read: IOPS=79, BW=9.90MiB/s (10.4MB/s)(49.9MiB/5038msec) 00:31:10.717 slat (nsec): min=5391, max=36207, avg=7894.58, stdev=2862.28 00:31:10.717 clat (usec): min=8110, max=57308, avg=37861.92, stdev=20557.65 00:31:10.717 lat (usec): min=8119, max=57317, avg=37869.81, stdev=20557.57 00:31:10.717 clat percentiles (usec): 00:31:10.717 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[11076], 00:31:10.717 | 30.00th=[13173], 40.00th=[51119], 50.00th=[52691], 60.00th=[53216], 00:31:10.717 | 70.00th=[53740], 80.00th=[54789], 90.00th=[54789], 95.00th=[55313], 00:31:10.717 | 99.00th=[56886], 99.50th=[57410], 99.90th=[57410], 99.95th=[57410], 00:31:10.717 | 99.99th=[57410] 00:31:10.717 bw ( KiB/s): min= 6912, max=12288, per=31.81%, avg=10135.40, stdev=1765.38, samples=10 00:31:10.717 iops : min= 54, max= 96, avg=79.10, stdev=13.76, samples=10 00:31:10.717 lat (msec) : 10=9.02%, 20=28.57%, 50=0.50%, 100=61.90% 00:31:10.717 cpu : usr=97.22%, sys=2.46%, ctx=9, majf=0, minf=130 00:31:10.717 IO depths : 1=14.8%, 2=85.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 issued rwts: total=399,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.717 filename0: (groupid=0, jobs=1): err= 0: pid=1501271: Thu Jul 25 10:19:48 2024 00:31:10.717 read: IOPS=83, BW=10.5MiB/s (11.0MB/s)(52.5MiB/5016msec) 00:31:10.717 slat (nsec): min=5388, max=36366, avg=8016.80, stdev=2463.79 00:31:10.717 clat (usec): min=8308, max=56846, avg=35810.20, stdev=20937.15 00:31:10.717 lat (usec): min=8316, max=56855, avg=35818.22, stdev=20936.83 00:31:10.717 clat percentiles (usec): 
00:31:10.717 | 1.00th=[ 8586], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[10683], 00:31:10.717 | 30.00th=[11994], 40.00th=[14353], 50.00th=[51643], 60.00th=[53216], 00:31:10.717 | 70.00th=[53740], 80.00th=[54264], 90.00th=[54789], 95.00th=[55313], 00:31:10.717 | 99.00th=[56886], 99.50th=[56886], 99.90th=[56886], 99.95th=[56886], 00:31:10.717 | 99.99th=[56886] 00:31:10.717 bw ( KiB/s): min= 7680, max=15360, per=33.51%, avg=10675.20, stdev=2571.49, samples=10 00:31:10.717 iops : min= 60, max= 120, avg=83.40, stdev=20.09, samples=10 00:31:10.717 lat (msec) : 10=9.76%, 20=32.38%, 100=57.86% 00:31:10.717 cpu : usr=96.75%, sys=2.93%, ctx=7, majf=0, minf=43 00:31:10.717 IO depths : 1=12.1%, 2=87.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.717 issued rwts: total=420,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.717 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:10.717 00:31:10.717 Run status group 0 (all jobs): 00:31:10.717 READ: bw=31.1MiB/s (32.6MB/s), 9.90MiB/s-10.8MiB/s (10.4MB/s-11.3MB/s), io=157MiB (164MB), run=5016-5038msec 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:10.717 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:10.718 10:19:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 bdev_null0 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 [2024-07-25 10:19:48.898544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 bdev_null1 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
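Each pass also ends the way it began: destroy_subsystems walks the same ids and removes the NVMe-oF subsystem and the null bdev behind it, as the nvmf_delete_subsystem and bdev_null_delete calls earlier in the log show. A minimal sketch of that teardown (argument values copied from the trace; the real helpers live in target/dif.sh):

destroy_subsystem_sketch() {
    local sub_id=$1
    # drop the subsystem (and its TCP listener) first, then the null bdev that backed it
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub_id}"
    rpc_cmd bdev_null_delete "bdev_null${sub_id}"
}

# e.g. fio_dif_1_multi_subsystems above finishes with: destroy_subsystems 0 1
for sub in 0 1; do destroy_subsystem_sketch "$sub"; done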
00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 bdev_null2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:10.718 10:19:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.718 { 00:31:10.718 "params": { 00:31:10.718 "name": "Nvme$subsystem", 00:31:10.718 "trtype": "$TEST_TRANSPORT", 00:31:10.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.718 "adrfam": "ipv4", 00:31:10.718 "trsvcid": "$NVMF_PORT", 00:31:10.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.718 "hdgst": ${hdgst:-false}, 00:31:10.718 "ddgst": ${ddgst:-false} 00:31:10.718 }, 00:31:10.718 "method": "bdev_nvme_attach_controller" 00:31:10.718 } 00:31:10.718 EOF 00:31:10.718 )") 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.718 { 00:31:10.718 "params": { 00:31:10.718 "name": "Nvme$subsystem", 00:31:10.718 "trtype": "$TEST_TRANSPORT", 00:31:10.718 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.718 "adrfam": "ipv4", 00:31:10.718 "trsvcid": "$NVMF_PORT", 00:31:10.718 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.718 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.718 "hdgst": ${hdgst:-false}, 00:31:10.718 "ddgst": ${ddgst:-false} 00:31:10.718 }, 00:31:10.718 "method": "bdev_nvme_attach_controller" 00:31:10.718 } 00:31:10.718 EOF 00:31:10.718 )") 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file <= files )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.718 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.719 { 00:31:10.719 "params": { 00:31:10.719 "name": "Nvme$subsystem", 00:31:10.719 "trtype": "$TEST_TRANSPORT", 00:31:10.719 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.719 "adrfam": "ipv4", 00:31:10.719 "trsvcid": "$NVMF_PORT", 00:31:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.719 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.719 "hdgst": ${hdgst:-false}, 00:31:10.719 "ddgst": ${ddgst:-false} 00:31:10.719 }, 00:31:10.719 "method": "bdev_nvme_attach_controller" 00:31:10.719 } 00:31:10.719 EOF 00:31:10.719 )") 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:10.719 "params": { 00:31:10.719 "name": "Nvme0", 00:31:10.719 "trtype": "tcp", 00:31:10.719 "traddr": "10.0.0.2", 00:31:10.719 "adrfam": "ipv4", 00:31:10.719 "trsvcid": "4420", 00:31:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.719 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.719 "hdgst": false, 00:31:10.719 "ddgst": false 00:31:10.719 }, 00:31:10.719 "method": "bdev_nvme_attach_controller" 00:31:10.719 },{ 00:31:10.719 "params": { 00:31:10.719 "name": "Nvme1", 00:31:10.719 "trtype": "tcp", 00:31:10.719 "traddr": "10.0.0.2", 00:31:10.719 "adrfam": "ipv4", 00:31:10.719 "trsvcid": "4420", 00:31:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:10.719 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:10.719 "hdgst": false, 00:31:10.719 "ddgst": false 00:31:10.719 }, 00:31:10.719 "method": "bdev_nvme_attach_controller" 00:31:10.719 },{ 00:31:10.719 "params": { 00:31:10.719 "name": "Nvme2", 00:31:10.719 "trtype": "tcp", 00:31:10.719 "traddr": "10.0.0.2", 00:31:10.719 "adrfam": "ipv4", 00:31:10.719 "trsvcid": "4420", 00:31:10.719 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:10.719 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:10.719 "hdgst": false, 00:31:10.719 "ddgst": false 00:31:10.719 }, 00:31:10.719 "method": "bdev_nvme_attach_controller" 00:31:10.719 }' 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.719 10:19:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.719 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.719 ... 00:31:10.719 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.719 ... 00:31:10.719 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:10.719 ... 00:31:10.719 fio-3.35 00:31:10.719 Starting 24 threads 00:31:10.719 EAL: No free 2048 kB hugepages reported on node 1 00:31:22.962 00:31:22.962 filename0: (groupid=0, jobs=1): err= 0: pid=1502575: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=480, BW=1921KiB/s (1967kB/s)(18.8MiB/10018msec) 00:31:22.963 slat (nsec): min=5526, max=74596, avg=12721.31, stdev=9524.07 00:31:22.963 clat (usec): min=15463, max=63584, avg=33226.70, stdev=6705.69 00:31:22.963 lat (usec): min=15472, max=63590, avg=33239.43, stdev=6706.29 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[19006], 5.00th=[21627], 10.00th=[24249], 20.00th=[30278], 00:31:22.963 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 00:31:22.963 | 70.00th=[33424], 80.00th=[39584], 90.00th=[42730], 95.00th=[44827], 00:31:22.963 | 99.00th=[51119], 99.50th=[55313], 99.90th=[57410], 99.95th=[63701], 00:31:22.963 | 99.99th=[63701] 00:31:22.963 bw ( KiB/s): min= 1664, max= 2144, per=4.03%, avg=1918.11, stdev=109.52, samples=19 00:31:22.963 iops : min= 416, max= 536, avg=479.53, stdev=27.38, samples=19 00:31:22.963 lat (msec) : 20=2.06%, 50=96.57%, 100=1.37% 00:31:22.963 cpu : usr=98.91%, sys=0.80%, ctx=9, majf=0, minf=41 00:31:22.963 IO depths : 1=2.5%, 2=4.9%, 4=13.7%, 8=67.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=91.5%, 8=4.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=4812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502576: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=511, BW=2046KiB/s (2095kB/s)(20.0MiB/10009msec) 00:31:22.963 slat (nsec): min=5539, max=73248, avg=11447.04, stdev=7807.80 00:31:22.963 clat (usec): min=12788, max=34598, avg=31177.08, stdev=2702.65 00:31:22.963 lat (usec): min=12798, max=34605, avg=31188.53, stdev=2703.54 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[19530], 5.00th=[23725], 10.00th=[30016], 20.00th=[30540], 00:31:22.963 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:31:22.963 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:22.963 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:31:22.963 | 99.99th=[34341] 00:31:22.963 bw ( KiB/s): min= 1920, max= 2176, per=4.30%, avg=2047.95, stdev=60.36, samples=19 00:31:22.963 iops : min= 480, max= 544, avg=511.95, stdev=15.09, samples=19 00:31:22.963 lat (msec) : 20=1.60%, 50=98.40% 
00:31:22.963 cpu : usr=99.14%, sys=0.57%, ctx=58, majf=0, minf=26 00:31:22.963 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502577: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=468, BW=1873KiB/s (1918kB/s)(18.3MiB/10007msec) 00:31:22.963 slat (nsec): min=5530, max=68698, avg=11673.91, stdev=8155.03 00:31:22.963 clat (usec): min=11763, max=65972, avg=34103.41, stdev=6510.07 00:31:22.963 lat (usec): min=11769, max=65988, avg=34115.08, stdev=6509.74 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[19792], 5.00th=[24249], 10.00th=[28443], 20.00th=[30540], 00:31:22.963 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:31:22.963 | 70.00th=[33817], 80.00th=[40633], 90.00th=[43254], 95.00th=[46400], 00:31:22.963 | 99.00th=[51643], 99.50th=[52691], 99.90th=[65799], 99.95th=[65799], 00:31:22.963 | 99.99th=[65799] 00:31:22.963 bw ( KiB/s): min= 1612, max= 1944, per=3.91%, avg=1860.58, stdev=76.35, samples=19 00:31:22.963 iops : min= 403, max= 486, avg=465.11, stdev=19.13, samples=19 00:31:22.963 lat (msec) : 20=1.30%, 50=97.05%, 100=1.64% 00:31:22.963 cpu : usr=98.69%, sys=0.96%, ctx=66, majf=0, minf=30 00:31:22.963 IO depths : 1=1.0%, 2=2.0%, 4=10.2%, 8=73.1%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=90.7%, 8=5.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=4685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502578: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=514, BW=2059KiB/s (2108kB/s)(20.1MiB/10010msec) 00:31:22.963 slat (nsec): min=5540, max=71746, avg=13657.48, stdev=9639.85 00:31:22.963 clat (usec): min=13531, max=34538, avg=30968.54, stdev=3078.23 00:31:22.963 lat (usec): min=13539, max=34546, avg=30982.19, stdev=3079.56 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[19006], 5.00th=[22676], 10.00th=[29754], 20.00th=[30540], 00:31:22.963 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[32113], 00:31:22.963 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:22.963 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:31:22.963 | 99.99th=[34341] 00:31:22.963 bw ( KiB/s): min= 1920, max= 2176, per=4.31%, avg=2054.47, stdev=79.55, samples=19 00:31:22.963 iops : min= 480, max= 544, avg=513.58, stdev=19.90, samples=19 00:31:22.963 lat (msec) : 20=2.14%, 50=97.86% 00:31:22.963 cpu : usr=99.19%, sys=0.50%, ctx=64, majf=0, minf=29 00:31:22.963 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=5152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502579: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=504, 
BW=2019KiB/s (2067kB/s)(19.8MiB/10018msec) 00:31:22.963 slat (nsec): min=5570, max=67595, avg=11762.38, stdev=8930.12 00:31:22.963 clat (usec): min=16054, max=44542, avg=31601.22, stdev=1939.28 00:31:22.963 lat (usec): min=16063, max=44566, avg=31612.98, stdev=1939.73 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[21627], 5.00th=[29754], 10.00th=[30278], 20.00th=[30802], 00:31:22.963 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:22.963 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:22.963 | 99.00th=[34341], 99.50th=[35390], 99.90th=[35914], 99.95th=[39584], 00:31:22.963 | 99.99th=[44303] 00:31:22.963 bw ( KiB/s): min= 1904, max= 2052, per=4.23%, avg=2014.90, stdev=58.23, samples=20 00:31:22.963 iops : min= 476, max= 513, avg=503.65, stdev=14.52, samples=20 00:31:22.963 lat (msec) : 20=0.47%, 50=99.53% 00:31:22.963 cpu : usr=99.22%, sys=0.51%, ctx=11, majf=0, minf=31 00:31:22.963 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502580: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=511, BW=2045KiB/s (2094kB/s)(20.0MiB/10024msec) 00:31:22.963 slat (nsec): min=5540, max=67011, avg=11525.61, stdev=8463.55 00:31:22.963 clat (usec): min=13618, max=57641, avg=31210.01, stdev=4068.21 00:31:22.963 lat (usec): min=13636, max=57647, avg=31221.53, stdev=4068.79 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[18220], 5.00th=[21627], 10.00th=[29492], 20.00th=[30540], 00:31:22.963 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31851], 60.00th=[32113], 00:31:22.963 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:22.963 | 99.00th=[43254], 99.50th=[44827], 99.90th=[57410], 99.95th=[57410], 00:31:22.963 | 99.99th=[57410] 00:31:22.963 bw ( KiB/s): min= 1920, max= 2288, per=4.29%, avg=2045.15, stdev=103.25, samples=20 00:31:22.963 iops : min= 480, max= 572, avg=511.25, stdev=25.83, samples=20 00:31:22.963 lat (msec) : 20=2.95%, 50=96.82%, 100=0.23% 00:31:22.963 cpu : usr=98.97%, sys=0.75%, ctx=12, majf=0, minf=36 00:31:22.963 IO depths : 1=2.7%, 2=6.4%, 4=17.2%, 8=63.3%, 16=10.4%, 32=0.0%, >=64=0.0% 00:31:22.963 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 complete : 0=0.0%, 4=92.2%, 8=2.6%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.963 issued rwts: total=5124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.963 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.963 filename0: (groupid=0, jobs=1): err= 0: pid=1502581: Thu Jul 25 10:20:00 2024 00:31:22.963 read: IOPS=501, BW=2008KiB/s (2056kB/s)(19.6MiB/10010msec) 00:31:22.963 slat (nsec): min=5569, max=60751, avg=11530.92, stdev=7732.35 00:31:22.963 clat (usec): min=11347, max=58435, avg=31780.05, stdev=2068.69 00:31:22.963 lat (usec): min=11353, max=58458, avg=31791.58, stdev=2068.91 00:31:22.963 clat percentiles (usec): 00:31:22.963 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:31:22.963 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:22.964 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33162], 95.00th=[33817], 00:31:22.964 | 99.00th=[34866], 
99.50th=[35914], 99.90th=[47973], 99.95th=[58459], 00:31:22.964 | 99.99th=[58459] 00:31:22.964 bw ( KiB/s): min= 1916, max= 2048, per=4.20%, avg=2000.37, stdev=63.53, samples=19 00:31:22.964 iops : min= 479, max= 512, avg=500.05, stdev=15.86, samples=19 00:31:22.964 lat (msec) : 20=0.38%, 50=99.52%, 100=0.10% 00:31:22.964 cpu : usr=99.18%, sys=0.55%, ctx=13, majf=0, minf=30 00:31:22.964 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.964 filename0: (groupid=0, jobs=1): err= 0: pid=1502582: Thu Jul 25 10:20:00 2024 00:31:22.964 read: IOPS=469, BW=1876KiB/s (1921kB/s)(18.4MiB/10020msec) 00:31:22.964 slat (nsec): min=5535, max=61324, avg=11003.63, stdev=7656.19 00:31:22.964 clat (usec): min=15962, max=61977, avg=34032.10, stdev=5988.87 00:31:22.964 lat (usec): min=15971, max=61984, avg=34043.11, stdev=5987.56 00:31:22.964 clat percentiles (usec): 00:31:22.964 | 1.00th=[18482], 5.00th=[24511], 10.00th=[30016], 20.00th=[30802], 00:31:22.964 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:31:22.964 | 70.00th=[33817], 80.00th=[41157], 90.00th=[43254], 95.00th=[44303], 00:31:22.964 | 99.00th=[47449], 99.50th=[51119], 99.90th=[59507], 99.95th=[59507], 00:31:22.964 | 99.99th=[62129] 00:31:22.964 bw ( KiB/s): min= 1532, max= 2111, per=3.94%, avg=1876.35, stdev=187.38, samples=20 00:31:22.964 iops : min= 383, max= 527, avg=469.05, stdev=46.79, samples=20 00:31:22.964 lat (msec) : 20=1.89%, 50=97.43%, 100=0.68% 00:31:22.964 cpu : usr=98.93%, sys=0.71%, ctx=84, majf=0, minf=44 00:31:22.964 IO depths : 1=1.6%, 2=3.4%, 4=12.5%, 8=71.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 0=0.0%, 4=91.0%, 8=3.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=4700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.964 filename1: (groupid=0, jobs=1): err= 0: pid=1502583: Thu Jul 25 10:20:00 2024 00:31:22.964 read: IOPS=470, BW=1880KiB/s (1925kB/s)(18.4MiB/10022msec) 00:31:22.964 slat (nsec): min=5536, max=69625, avg=10703.86, stdev=7291.73 00:31:22.964 clat (usec): min=17416, max=62770, avg=33972.88, stdev=6358.54 00:31:22.964 lat (usec): min=17436, max=62776, avg=33983.58, stdev=6358.80 00:31:22.964 clat percentiles (usec): 00:31:22.964 | 1.00th=[20841], 5.00th=[23725], 10.00th=[27132], 20.00th=[30540], 00:31:22.964 | 30.00th=[31065], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:31:22.964 | 70.00th=[34341], 80.00th=[40633], 90.00th=[43254], 95.00th=[45351], 00:31:22.964 | 99.00th=[50070], 99.50th=[52167], 99.90th=[54264], 99.95th=[62653], 00:31:22.964 | 99.99th=[62653] 00:31:22.964 bw ( KiB/s): min= 1650, max= 2024, per=3.94%, avg=1877.05, stdev=84.39, samples=20 00:31:22.964 iops : min= 412, max= 506, avg=469.20, stdev=21.21, samples=20 00:31:22.964 lat (msec) : 20=0.68%, 50=98.05%, 100=1.27% 00:31:22.964 cpu : usr=98.89%, sys=0.76%, ctx=71, majf=0, minf=34 00:31:22.964 IO depths : 1=1.5%, 2=3.1%, 4=11.9%, 8=70.9%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 
0=0.0%, 4=91.0%, 8=4.9%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=4711,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.964 filename1: (groupid=0, jobs=1): err= 0: pid=1502584: Thu Jul 25 10:20:00 2024 00:31:22.964 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10019msec) 00:31:22.964 slat (nsec): min=5543, max=85334, avg=10305.10, stdev=7659.09 00:31:22.964 clat (usec): min=15954, max=64492, avg=35066.21, stdev=5948.44 00:31:22.964 lat (usec): min=15961, max=64499, avg=35076.52, stdev=5946.87 00:31:22.964 clat percentiles (usec): 00:31:22.964 | 1.00th=[19530], 5.00th=[29754], 10.00th=[30540], 20.00th=[31327], 00:31:22.964 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33162], 00:31:22.964 | 70.00th=[38536], 80.00th=[41681], 90.00th=[43779], 95.00th=[44303], 00:31:22.964 | 99.00th=[51119], 99.50th=[53216], 99.90th=[60556], 99.95th=[64226], 00:31:22.964 | 99.99th=[64750] 00:31:22.964 bw ( KiB/s): min= 1408, max= 2048, per=3.81%, avg=1816.84, stdev=204.21, samples=19 00:31:22.964 iops : min= 352, max= 512, avg=454.21, stdev=51.05, samples=19 00:31:22.964 lat (msec) : 20=1.07%, 50=97.90%, 100=1.03% 00:31:22.964 cpu : usr=93.33%, sys=2.98%, ctx=162, majf=0, minf=42 00:31:22.964 IO depths : 1=1.5%, 2=3.3%, 4=11.5%, 8=70.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 0=0.0%, 4=92.2%, 8=2.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=4561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.964 filename1: (groupid=0, jobs=1): err= 0: pid=1502585: Thu Jul 25 10:20:00 2024 00:31:22.964 read: IOPS=460, BW=1842KiB/s (1886kB/s)(18.0MiB/10007msec) 00:31:22.964 slat (nsec): min=5550, max=57851, avg=8447.87, stdev=5121.60 00:31:22.964 clat (usec): min=10888, max=70982, avg=34698.55, stdev=6564.45 00:31:22.964 lat (usec): min=10894, max=70999, avg=34707.00, stdev=6564.33 00:31:22.964 clat percentiles (usec): 00:31:22.964 | 1.00th=[20579], 5.00th=[24511], 10.00th=[29754], 20.00th=[31065], 00:31:22.964 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32900], 60.00th=[33424], 00:31:22.964 | 70.00th=[36439], 80.00th=[41157], 90.00th=[43254], 95.00th=[45351], 00:31:22.964 | 99.00th=[53740], 99.50th=[60556], 99.90th=[65799], 99.95th=[70779], 00:31:22.964 | 99.99th=[70779] 00:31:22.964 bw ( KiB/s): min= 1667, max= 1968, per=3.86%, avg=1838.68, stdev=96.30, samples=19 00:31:22.964 iops : min= 416, max= 492, avg=459.63, stdev=24.15, samples=19 00:31:22.964 lat (msec) : 20=0.89%, 50=97.13%, 100=1.98% 00:31:22.964 cpu : usr=98.95%, sys=0.73%, ctx=14, majf=0, minf=27 00:31:22.964 IO depths : 1=0.7%, 2=1.8%, 4=8.9%, 8=74.4%, 16=14.2%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 0=0.0%, 4=90.7%, 8=5.8%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=4607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.964 filename1: (groupid=0, jobs=1): err= 0: pid=1502586: Thu Jul 25 10:20:00 2024 00:31:22.964 read: IOPS=716, BW=2867KiB/s (2936kB/s)(28.0MiB/10001msec) 00:31:22.964 slat (nsec): min=5536, max=39412, avg=6623.13, stdev=2025.72 00:31:22.964 clat (usec): min=7995, max=33835, avg=22264.22, stdev=3726.08 00:31:22.964 lat (usec): min=8007, max=33841, avg=22270.84, 
stdev=3726.19 00:31:22.964 clat percentiles (usec): 00:31:22.964 | 1.00th=[14877], 5.00th=[17957], 10.00th=[18744], 20.00th=[19792], 00:31:22.964 | 30.00th=[20317], 40.00th=[21103], 50.00th=[21627], 60.00th=[22414], 00:31:22.964 | 70.00th=[22938], 80.00th=[23462], 90.00th=[30016], 95.00th=[31327], 00:31:22.964 | 99.00th=[32637], 99.50th=[32637], 99.90th=[33817], 99.95th=[33817], 00:31:22.964 | 99.99th=[33817] 00:31:22.964 bw ( KiB/s): min= 2554, max= 3072, per=6.01%, avg=2862.84, stdev=137.08, samples=19 00:31:22.964 iops : min= 638, max= 768, avg=715.68, stdev=34.33, samples=19 00:31:22.964 lat (msec) : 10=0.32%, 20=23.79%, 50=75.89% 00:31:22.964 cpu : usr=99.00%, sys=0.71%, ctx=51, majf=0, minf=46 00:31:22.964 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.964 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.964 issued rwts: total=7168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.964 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 filename1: (groupid=0, jobs=1): err= 0: pid=1502587: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=408, BW=1633KiB/s (1672kB/s)(16.0MiB/10006msec) 00:31:22.965 slat (nsec): min=5553, max=71522, avg=11634.33, stdev=7908.24 00:31:22.965 clat (usec): min=7113, max=65789, avg=39126.50, stdev=6118.58 00:31:22.965 lat (usec): min=7119, max=65808, avg=39138.13, stdev=6119.15 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[21103], 5.00th=[30278], 10.00th=[31327], 20.00th=[32637], 00:31:22.965 | 30.00th=[37487], 40.00th=[40109], 50.00th=[41157], 60.00th=[42206], 00:31:22.965 | 70.00th=[42730], 80.00th=[43254], 90.00th=[44303], 95.00th=[46400], 00:31:22.965 | 99.00th=[50594], 99.50th=[52167], 99.90th=[65799], 99.95th=[65799], 00:31:22.965 | 99.99th=[65799] 00:31:22.965 bw ( KiB/s): min= 1440, max= 1904, per=3.41%, avg=1625.84, stdev=163.52, samples=19 00:31:22.965 iops : min= 360, max= 476, avg=406.42, stdev=40.90, samples=19 00:31:22.965 lat (msec) : 10=0.34%, 20=0.39%, 50=97.94%, 100=1.32% 00:31:22.965 cpu : usr=98.54%, sys=1.06%, ctx=138, majf=0, minf=22 00:31:22.965 IO depths : 1=0.5%, 2=1.1%, 4=14.8%, 8=70.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 complete : 0=0.0%, 4=92.7%, 8=2.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 issued rwts: total=4084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 filename1: (groupid=0, jobs=1): err= 0: pid=1502588: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=492, BW=1968KiB/s (2015kB/s)(19.3MiB/10018msec) 00:31:22.965 slat (nsec): min=5558, max=68035, avg=12123.62, stdev=8103.74 00:31:22.965 clat (usec): min=16718, max=52974, avg=32433.25, stdev=6080.32 00:31:22.965 lat (usec): min=16724, max=52994, avg=32445.37, stdev=6081.02 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[18220], 5.00th=[21365], 10.00th=[24249], 20.00th=[30016], 00:31:22.965 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32375], 00:31:22.965 | 70.00th=[32900], 80.00th=[33817], 90.00th=[42206], 95.00th=[44303], 00:31:22.965 | 99.00th=[49546], 99.50th=[51119], 99.90th=[52691], 99.95th=[52691], 00:31:22.965 | 99.99th=[53216] 00:31:22.965 bw ( KiB/s): min= 1792, max= 2176, per=4.12%, avg=1964.10, stdev=89.66, samples=20 00:31:22.965 iops : min= 448, max= 544, avg=490.95, 
stdev=22.41, samples=20 00:31:22.965 lat (msec) : 20=2.86%, 50=96.25%, 100=0.89% 00:31:22.965 cpu : usr=98.46%, sys=1.13%, ctx=92, majf=0, minf=32 00:31:22.965 IO depths : 1=2.2%, 2=5.3%, 4=17.1%, 8=64.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:31:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 complete : 0=0.0%, 4=92.4%, 8=2.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 issued rwts: total=4929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 filename1: (groupid=0, jobs=1): err= 0: pid=1502589: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=502, BW=2008KiB/s (2056kB/s)(19.6MiB/10007msec) 00:31:22.965 slat (nsec): min=5551, max=71602, avg=12386.65, stdev=9653.47 00:31:22.965 clat (usec): min=11559, max=52144, avg=31753.51, stdev=2152.46 00:31:22.965 lat (usec): min=11564, max=52162, avg=31765.89, stdev=2152.70 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[29230], 5.00th=[29754], 10.00th=[30278], 20.00th=[30802], 00:31:22.965 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:22.965 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33817], 00:31:22.965 | 99.00th=[34866], 99.50th=[34866], 99.90th=[52167], 99.95th=[52167], 00:31:22.965 | 99.99th=[52167] 00:31:22.965 bw ( KiB/s): min= 1916, max= 2048, per=4.20%, avg=2000.63, stdev=63.73, samples=19 00:31:22.965 iops : min= 479, max= 512, avg=500.16, stdev=15.93, samples=19 00:31:22.965 lat (msec) : 20=0.64%, 50=99.04%, 100=0.32% 00:31:22.965 cpu : usr=99.20%, sys=0.54%, ctx=9, majf=0, minf=31 00:31:22.965 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 filename1: (groupid=0, jobs=1): err= 0: pid=1502590: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10005msec) 00:31:22.965 slat (nsec): min=5396, max=71810, avg=12424.41, stdev=9590.92 00:31:22.965 clat (usec): min=7007, max=61362, avg=33726.17, stdev=6773.70 00:31:22.965 lat (usec): min=7013, max=61369, avg=33738.59, stdev=6773.37 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[19530], 5.00th=[22938], 10.00th=[27919], 20.00th=[30802], 00:31:22.965 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32113], 60.00th=[32637], 00:31:22.965 | 70.00th=[33424], 80.00th=[39060], 90.00th=[43254], 95.00th=[46924], 00:31:22.965 | 99.00th=[52691], 99.50th=[54264], 99.90th=[61080], 99.95th=[61604], 00:31:22.965 | 99.99th=[61604] 00:31:22.965 bw ( KiB/s): min= 1664, max= 1968, per=3.96%, avg=1887.74, stdev=63.43, samples=19 00:31:22.965 iops : min= 416, max= 492, avg=471.89, stdev=15.84, samples=19 00:31:22.965 lat (msec) : 10=0.30%, 20=1.31%, 50=95.59%, 100=2.81% 00:31:22.965 cpu : usr=98.97%, sys=0.73%, ctx=23, majf=0, minf=53 00:31:22.965 IO depths : 1=0.8%, 2=1.6%, 4=10.1%, 8=73.9%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 complete : 0=0.0%, 4=90.7%, 8=5.6%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 issued rwts: total=4737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 
filename2: (groupid=0, jobs=1): err= 0: pid=1502591: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.3MiB/10015msec) 00:31:22.965 slat (usec): min=5, max=269, avg=12.91, stdev=11.20 00:31:22.965 clat (usec): min=14129, max=61332, avg=32380.81, stdev=6319.82 00:31:22.965 lat (usec): min=14135, max=61342, avg=32393.73, stdev=6320.46 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[17695], 5.00th=[21103], 10.00th=[23987], 20.00th=[29754], 00:31:22.965 | 30.00th=[31065], 40.00th=[31589], 50.00th=[32113], 60.00th=[32637], 00:31:22.965 | 70.00th=[33162], 80.00th=[34341], 90.00th=[41157], 95.00th=[42730], 00:31:22.965 | 99.00th=[52167], 99.50th=[53740], 99.90th=[61080], 99.95th=[61080], 00:31:22.965 | 99.99th=[61080] 00:31:22.965 bw ( KiB/s): min= 1792, max= 2224, per=4.13%, avg=1968.85, stdev=124.88, samples=20 00:31:22.965 iops : min= 448, max= 556, avg=492.10, stdev=31.22, samples=20 00:31:22.965 lat (msec) : 20=2.86%, 50=95.74%, 100=1.40% 00:31:22.965 cpu : usr=95.01%, sys=2.50%, ctx=57, majf=0, minf=39 00:31:22.965 IO depths : 1=1.5%, 2=3.3%, 4=12.1%, 8=71.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:22.965 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.965 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.965 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.965 filename2: (groupid=0, jobs=1): err= 0: pid=1502592: Thu Jul 25 10:20:00 2024 00:31:22.965 read: IOPS=519, BW=2076KiB/s (2126kB/s)(20.3MiB/10019msec) 00:31:22.965 slat (nsec): min=5576, max=72029, avg=12681.46, stdev=9687.29 00:31:22.965 clat (usec): min=16357, max=34705, avg=30718.44, stdev=3389.96 00:31:22.965 lat (usec): min=16365, max=34728, avg=30731.12, stdev=3391.13 00:31:22.965 clat percentiles (usec): 00:31:22.965 | 1.00th=[19268], 5.00th=[21365], 10.00th=[23987], 20.00th=[30278], 00:31:22.965 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:31:22.965 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[33424], 00:31:22.965 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:22.965 | 99.99th=[34866] 00:31:22.965 bw ( KiB/s): min= 1920, max= 2304, per=4.36%, avg=2074.68, stdev=91.39, samples=19 00:31:22.965 iops : min= 480, max= 576, avg=518.63, stdev=22.86, samples=19 00:31:22.966 lat (msec) : 20=2.23%, 50=97.77% 00:31:22.966 cpu : usr=98.76%, sys=0.88%, ctx=145, majf=0, minf=27 00:31:22.966 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502593: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=486, BW=1948KiB/s (1994kB/s)(19.0MiB/10006msec) 00:31:22.966 slat (nsec): min=5416, max=66078, avg=11932.31, stdev=9138.22 00:31:22.966 clat (usec): min=7283, max=64042, avg=32778.01, stdev=4665.42 00:31:22.966 lat (usec): min=7289, max=64058, avg=32789.94, stdev=4664.68 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[20317], 5.00th=[29754], 10.00th=[30278], 20.00th=[30802], 00:31:22.966 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32375], 00:31:22.966 | 
70.00th=[32637], 80.00th=[33162], 90.00th=[34866], 95.00th=[42730], 00:31:22.966 | 99.00th=[54264], 99.50th=[58459], 99.90th=[63177], 99.95th=[64226], 00:31:22.966 | 99.99th=[64226] 00:31:22.966 bw ( KiB/s): min= 1504, max= 2048, per=4.07%, avg=1938.21, stdev=134.92, samples=19 00:31:22.966 iops : min= 376, max= 512, avg=484.47, stdev=33.73, samples=19 00:31:22.966 lat (msec) : 10=0.12%, 20=0.84%, 50=97.72%, 100=1.31% 00:31:22.966 cpu : usr=99.18%, sys=0.51%, ctx=105, majf=0, minf=35 00:31:22.966 IO depths : 1=0.6%, 2=4.1%, 4=16.5%, 8=64.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=93.5%, 8=1.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=4872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502594: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10016msec) 00:31:22.966 slat (nsec): min=5552, max=69640, avg=12625.39, stdev=9388.72 00:31:22.966 clat (usec): min=14412, max=63639, avg=33765.72, stdev=6412.03 00:31:22.966 lat (usec): min=14419, max=63646, avg=33778.34, stdev=6412.03 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[18744], 5.00th=[22414], 10.00th=[27657], 20.00th=[30540], 00:31:22.966 | 30.00th=[31327], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:31:22.966 | 70.00th=[33817], 80.00th=[39584], 90.00th=[43254], 95.00th=[45876], 00:31:22.966 | 99.00th=[51119], 99.50th=[51643], 99.90th=[58983], 99.95th=[61604], 00:31:22.966 | 99.99th=[63701] 00:31:22.966 bw ( KiB/s): min= 1764, max= 1972, per=3.96%, avg=1886.80, stdev=55.32, samples=20 00:31:22.966 iops : min= 441, max= 493, avg=471.70, stdev=13.83, samples=20 00:31:22.966 lat (msec) : 20=1.71%, 50=97.02%, 100=1.27% 00:31:22.966 cpu : usr=98.84%, sys=0.75%, ctx=29, majf=0, minf=40 00:31:22.966 IO depths : 1=1.6%, 2=3.5%, 4=12.6%, 8=69.7%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=91.4%, 8=4.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=4735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502595: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=455, BW=1824KiB/s (1868kB/s)(17.8MiB/10003msec) 00:31:22.966 slat (nsec): min=5545, max=71706, avg=13125.90, stdev=10003.28 00:31:22.966 clat (usec): min=15898, max=61615, avg=35008.59, stdev=6758.07 00:31:22.966 lat (usec): min=15904, max=61639, avg=35021.72, stdev=6757.12 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[19006], 5.00th=[23725], 10.00th=[29754], 20.00th=[30802], 00:31:22.966 | 30.00th=[31589], 40.00th=[32113], 50.00th=[32637], 60.00th=[33424], 00:31:22.966 | 70.00th=[39584], 80.00th=[42206], 90.00th=[44303], 95.00th=[45351], 00:31:22.966 | 99.00th=[52691], 99.50th=[55313], 99.90th=[58983], 99.95th=[61604], 00:31:22.966 | 99.99th=[61604] 00:31:22.966 bw ( KiB/s): min= 1536, max= 1944, per=3.85%, avg=1832.79, stdev=110.98, samples=19 00:31:22.966 iops : min= 384, max= 486, avg=458.16, stdev=27.77, samples=19 00:31:22.966 lat (msec) : 20=1.62%, 50=96.82%, 100=1.56% 00:31:22.966 cpu : usr=98.77%, sys=0.90%, ctx=68, majf=0, minf=36 00:31:22.966 IO depths : 1=1.5%, 2=3.7%, 4=13.2%, 8=68.5%, 16=13.1%, 32=0.0%, >=64=0.0% 
00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=92.0%, 8=4.0%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=4561,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502596: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=591, BW=2367KiB/s (2423kB/s)(23.1MiB/10006msec) 00:31:22.966 slat (nsec): min=5541, max=69507, avg=9315.52, stdev=6582.52 00:31:22.966 clat (usec): min=6139, max=34730, avg=26964.41, stdev=5483.84 00:31:22.966 lat (usec): min=6151, max=34749, avg=26973.73, stdev=5485.86 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[16057], 5.00th=[19006], 10.00th=[19792], 20.00th=[21103], 00:31:22.966 | 30.00th=[22152], 40.00th=[23725], 50.00th=[30278], 60.00th=[30802], 00:31:22.966 | 70.00th=[31589], 80.00th=[32113], 90.00th=[32637], 95.00th=[33162], 00:31:22.966 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:31:22.966 | 99.99th=[34866] 00:31:22.966 bw ( KiB/s): min= 1920, max= 2816, per=4.95%, avg=2357.63, stdev=305.93, samples=19 00:31:22.966 iops : min= 480, max= 704, avg=589.37, stdev=76.52, samples=19 00:31:22.966 lat (msec) : 10=0.27%, 20=10.10%, 50=89.63% 00:31:22.966 cpu : usr=99.19%, sys=0.54%, ctx=17, majf=0, minf=28 00:31:22.966 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502597: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10014msec) 00:31:22.966 slat (nsec): min=5535, max=72255, avg=11554.80, stdev=7705.62 00:31:22.966 clat (usec): min=14450, max=62654, avg=32641.69, stdev=4981.20 00:31:22.966 lat (usec): min=14456, max=62660, avg=32653.25, stdev=4980.99 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[20055], 5.00th=[26346], 10.00th=[29754], 20.00th=[30540], 00:31:22.966 | 30.00th=[31065], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:22.966 | 70.00th=[32637], 80.00th=[33162], 90.00th=[40109], 95.00th=[43779], 00:31:22.966 | 99.00th=[49546], 99.50th=[52167], 99.90th=[60031], 99.95th=[62653], 00:31:22.966 | 99.99th=[62653] 00:31:22.966 bw ( KiB/s): min= 1792, max= 2048, per=4.10%, avg=1953.15, stdev=80.93, samples=20 00:31:22.966 iops : min= 448, max= 512, avg=488.25, stdev=20.19, samples=20 00:31:22.966 lat (msec) : 20=1.02%, 50=98.30%, 100=0.67% 00:31:22.966 cpu : usr=99.11%, sys=0.61%, ctx=7, majf=0, minf=38 00:31:22.966 IO depths : 1=3.6%, 2=7.2%, 4=17.8%, 8=61.6%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=92.3%, 8=2.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=4894,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 filename2: (groupid=0, jobs=1): err= 0: pid=1502598: Thu Jul 25 10:20:00 2024 00:31:22.966 read: IOPS=472, BW=1890KiB/s (1936kB/s)(18.5MiB/10015msec) 00:31:22.966 slat (nsec): min=5546, max=71116, avg=11340.78, stdev=8679.55 00:31:22.966 clat (usec): 
min=13509, max=61602, avg=33791.84, stdev=5550.99 00:31:22.966 lat (usec): min=13515, max=61609, avg=33803.18, stdev=5550.94 00:31:22.966 clat percentiles (usec): 00:31:22.966 | 1.00th=[20579], 5.00th=[27395], 10.00th=[30016], 20.00th=[31065], 00:31:22.966 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32375], 60.00th=[32900], 00:31:22.966 | 70.00th=[33817], 80.00th=[38011], 90.00th=[41681], 95.00th=[43779], 00:31:22.966 | 99.00th=[51643], 99.50th=[54789], 99.90th=[60556], 99.95th=[61604], 00:31:22.966 | 99.99th=[61604] 00:31:22.966 bw ( KiB/s): min= 1696, max= 2000, per=3.97%, avg=1890.00, stdev=79.34, samples=20 00:31:22.966 iops : min= 424, max= 500, avg=472.50, stdev=19.83, samples=20 00:31:22.966 lat (msec) : 20=0.87%, 50=97.70%, 100=1.44% 00:31:22.966 cpu : usr=98.92%, sys=0.72%, ctx=52, majf=0, minf=36 00:31:22.966 IO depths : 1=0.3%, 2=0.8%, 4=5.3%, 8=78.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:31:22.966 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 complete : 0=0.0%, 4=90.0%, 8=7.2%, 16=2.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:22.966 issued rwts: total=4733,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:22.966 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:22.966 00:31:22.966 Run status group 0 (all jobs): 00:31:22.966 READ: bw=46.5MiB/s (48.8MB/s), 1633KiB/s-2867KiB/s (1672kB/s-2936kB/s), io=466MiB (489MB), run=10001-10024msec 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:22.966 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:22.967 
10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 bdev_null0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 [2024-07-25 10:20:00.500313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 bdev_null1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.967 10:20:00 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.967 { 00:31:22.967 "params": { 00:31:22.967 "name": "Nvme$subsystem", 00:31:22.967 "trtype": "$TEST_TRANSPORT", 00:31:22.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.967 "adrfam": "ipv4", 00:31:22.967 "trsvcid": "$NVMF_PORT", 00:31:22.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.967 "hdgst": ${hdgst:-false}, 00:31:22.967 "ddgst": ${ddgst:-false} 00:31:22.967 }, 00:31:22.967 "method": "bdev_nvme_attach_controller" 00:31:22.967 } 00:31:22.967 EOF 00:31:22.967 )") 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.967 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:22.968 { 00:31:22.968 "params": { 00:31:22.968 "name": "Nvme$subsystem", 00:31:22.968 "trtype": "$TEST_TRANSPORT", 00:31:22.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:22.968 "adrfam": "ipv4", 00:31:22.968 "trsvcid": "$NVMF_PORT", 00:31:22.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:22.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:22.968 "hdgst": ${hdgst:-false}, 00:31:22.968 "ddgst": ${ddgst:-false} 
00:31:22.968 }, 00:31:22.968 "method": "bdev_nvme_attach_controller" 00:31:22.968 } 00:31:22.968 EOF 00:31:22.968 )") 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:22.968 "params": { 00:31:22.968 "name": "Nvme0", 00:31:22.968 "trtype": "tcp", 00:31:22.968 "traddr": "10.0.0.2", 00:31:22.968 "adrfam": "ipv4", 00:31:22.968 "trsvcid": "4420", 00:31:22.968 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:22.968 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:22.968 "hdgst": false, 00:31:22.968 "ddgst": false 00:31:22.968 }, 00:31:22.968 "method": "bdev_nvme_attach_controller" 00:31:22.968 },{ 00:31:22.968 "params": { 00:31:22.968 "name": "Nvme1", 00:31:22.968 "trtype": "tcp", 00:31:22.968 "traddr": "10.0.0.2", 00:31:22.968 "adrfam": "ipv4", 00:31:22.968 "trsvcid": "4420", 00:31:22.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:22.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:22.968 "hdgst": false, 00:31:22.968 "ddgst": false 00:31:22.968 }, 00:31:22.968 "method": "bdev_nvme_attach_controller" 00:31:22.968 }' 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:22.968 10:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:22.968 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:22.968 ... 00:31:22.968 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:22.968 ... 
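The xtrace above reduces to a short, repeatable pattern: each null bdev is created with protection metadata, exported over NVMe/TCP, and then exercised through fio's SPDK bdev plugin. A minimal sketch follows, using only commands that appear verbatim in this trace; SPDK_DIR, bdev.json and job.fio are illustrative stand-ins for the workspace path and for the /dev/fd/62 and /dev/fd/61 descriptors the harness actually passes.
# rpc_cmd is the test-harness helper used throughout this log to issue SPDK RPCs.
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio runs through the SPDK bdev plugin via LD_PRELOAD; the generated nvmf attach
# configuration is fed in as JSON, followed by the job file as a positional argument.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
The job file passed on /dev/fd/61 is what produces the filename0/filename1 descriptions printed above (8KiB/16KiB/128KiB blocks at iodepth=8, per the bs/numjobs/iodepth assignments traced earlier), which fio expands into the threads started below.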
00:31:22.968 fio-3.35 00:31:22.968 Starting 4 threads 00:31:22.968 EAL: No free 2048 kB hugepages reported on node 1 00:31:28.307 00:31:28.307 filename0: (groupid=0, jobs=1): err= 0: pid=1505097: Thu Jul 25 10:20:06 2024 00:31:28.307 read: IOPS=2100, BW=16.4MiB/s (17.2MB/s)(82.7MiB/5042msec) 00:31:28.307 slat (nsec): min=5371, max=32014, avg=5998.06, stdev=1791.54 00:31:28.307 clat (usec): min=1246, max=43399, avg=3777.48, stdev=970.18 00:31:28.307 lat (usec): min=1254, max=43404, avg=3783.47, stdev=970.14 00:31:28.307 clat percentiles (usec): 00:31:28.307 | 1.00th=[ 2442], 5.00th=[ 2769], 10.00th=[ 2999], 20.00th=[ 3228], 00:31:28.307 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 3851], 00:31:28.307 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4883], 00:31:28.307 | 99.00th=[ 5407], 99.50th=[ 5604], 99.90th=[ 5997], 99.95th=[ 6325], 00:31:28.307 | 99.99th=[41681] 00:31:28.307 bw ( KiB/s): min=16560, max=17392, per=26.42%, avg=16920.70, stdev=208.97, samples=10 00:31:28.307 iops : min= 2070, max= 2174, avg=2114.90, stdev=26.16, samples=10 00:31:28.307 lat (msec) : 2=0.16%, 4=68.24%, 10=31.56%, 50=0.04% 00:31:28.307 cpu : usr=97.14%, sys=2.56%, ctx=69, majf=0, minf=42 00:31:28.307 IO depths : 1=0.1%, 2=1.3%, 4=68.2%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 complete : 0=0.0%, 4=94.3%, 8=5.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 issued rwts: total=10590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:28.307 filename0: (groupid=0, jobs=1): err= 0: pid=1505099: Thu Jul 25 10:20:06 2024 00:31:28.307 read: IOPS=1924, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5002msec) 00:31:28.307 slat (nsec): min=7830, max=36186, avg=8624.65, stdev=1753.00 00:31:28.307 clat (usec): min=2161, max=6955, avg=4133.80, stdev=672.66 00:31:28.307 lat (usec): min=2169, max=6991, avg=4142.42, stdev=672.63 00:31:28.307 clat percentiles (usec): 00:31:28.307 | 1.00th=[ 2704], 5.00th=[ 3064], 10.00th=[ 3294], 20.00th=[ 3589], 00:31:28.307 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4080], 60.00th=[ 4228], 00:31:28.307 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5276], 00:31:28.307 | 99.00th=[ 5997], 99.50th=[ 6194], 99.90th=[ 6521], 99.95th=[ 6587], 00:31:28.307 | 99.99th=[ 6980] 00:31:28.307 bw ( KiB/s): min=15152, max=15760, per=24.11%, avg=15442.89, stdev=207.74, samples=9 00:31:28.307 iops : min= 1894, max= 1970, avg=1930.33, stdev=25.97, samples=9 00:31:28.307 lat (msec) : 4=44.61%, 10=55.39% 00:31:28.307 cpu : usr=96.94%, sys=2.76%, ctx=8, majf=0, minf=58 00:31:28.307 IO depths : 1=0.1%, 2=1.0%, 4=67.9%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 complete : 0=0.0%, 4=94.4%, 8=5.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 issued rwts: total=9628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:28.307 filename1: (groupid=0, jobs=1): err= 0: pid=1505100: Thu Jul 25 10:20:06 2024 00:31:28.307 read: IOPS=1913, BW=15.0MiB/s (15.7MB/s)(74.8MiB/5003msec) 00:31:28.307 slat (nsec): min=5376, max=36038, avg=6019.46, stdev=1695.95 00:31:28.307 clat (usec): min=1732, max=47032, avg=4163.49, stdev=1414.80 00:31:28.307 lat (usec): min=1737, max=47068, avg=4169.51, stdev=1415.02 00:31:28.307 clat percentiles (usec): 00:31:28.307 | 1.00th=[ 2671], 5.00th=[ 
2999], 10.00th=[ 3261], 20.00th=[ 3556], 00:31:28.307 | 30.00th=[ 3785], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4293], 00:31:28.307 | 70.00th=[ 4424], 80.00th=[ 4686], 90.00th=[ 5014], 95.00th=[ 5276], 00:31:28.307 | 99.00th=[ 5866], 99.50th=[ 5997], 99.90th=[ 6652], 99.95th=[46924], 00:31:28.307 | 99.99th=[46924] 00:31:28.307 bw ( KiB/s): min=13828, max=15889, per=23.92%, avg=15321.56, stdev=604.65, samples=9 00:31:28.307 iops : min= 1728, max= 1986, avg=1915.11, stdev=75.72, samples=9 00:31:28.307 lat (msec) : 2=0.04%, 4=43.38%, 10=56.50%, 50=0.08% 00:31:28.307 cpu : usr=97.32%, sys=2.38%, ctx=10, majf=0, minf=38 00:31:28.307 IO depths : 1=0.2%, 2=1.6%, 4=67.8%, 8=30.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 issued rwts: total=9574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:28.307 filename1: (groupid=0, jobs=1): err= 0: pid=1505101: Thu Jul 25 10:20:06 2024 00:31:28.307 read: IOPS=2114, BW=16.5MiB/s (17.3MB/s)(82.6MiB/5002msec) 00:31:28.307 slat (nsec): min=5371, max=32342, avg=5979.74, stdev=1703.95 00:31:28.307 clat (usec): min=1654, max=6661, avg=3768.09, stdev=618.58 00:31:28.307 lat (usec): min=1660, max=6667, avg=3774.07, stdev=618.53 00:31:28.307 clat percentiles (usec): 00:31:28.307 | 1.00th=[ 2442], 5.00th=[ 2802], 10.00th=[ 2999], 20.00th=[ 3228], 00:31:28.307 | 30.00th=[ 3425], 40.00th=[ 3621], 50.00th=[ 3785], 60.00th=[ 3884], 00:31:28.307 | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4883], 00:31:28.307 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6128], 99.95th=[ 6194], 00:31:28.307 | 99.99th=[ 6652] 00:31:28.307 bw ( KiB/s): min=16718, max=17136, per=26.39%, avg=16901.89, stdev=135.88, samples=9 00:31:28.307 iops : min= 2089, max= 2142, avg=2112.56, stdev=17.14, samples=9 00:31:28.307 lat (msec) : 2=0.14%, 4=67.34%, 10=32.52% 00:31:28.307 cpu : usr=97.14%, sys=2.62%, ctx=9, majf=0, minf=54 00:31:28.307 IO depths : 1=0.2%, 2=0.9%, 4=68.9%, 8=30.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:28.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:28.307 issued rwts: total=10575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:28.307 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:28.307 00:31:28.307 Run status group 0 (all jobs): 00:31:28.307 READ: bw=62.5MiB/s (65.6MB/s), 15.0MiB/s-16.5MiB/s (15.7MB/s-17.3MB/s), io=315MiB (331MB), run=5002-5042msec 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 00:31:28.308 real 0m24.188s 00:31:28.308 user 5m19.665s 00:31:28.308 sys 0m4.126s 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 ************************************ 00:31:28.308 END TEST fio_dif_rand_params 00:31:28.308 ************************************ 00:31:28.308 10:20:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:28.308 10:20:06 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:28.308 10:20:06 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 ************************************ 00:31:28.308 START TEST fio_dif_digest 00:31:28.308 ************************************ 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # 
create_subsystems 0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 bdev_null0 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:28.308 [2024-07-25 10:20:07.025522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:28.308 { 00:31:28.308 "params": { 00:31:28.308 "name": "Nvme$subsystem", 00:31:28.308 "trtype": "$TEST_TRANSPORT", 00:31:28.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:28.308 "adrfam": "ipv4", 00:31:28.308 "trsvcid": "$NVMF_PORT", 00:31:28.308 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:28.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:28.308 "hdgst": ${hdgst:-false}, 00:31:28.308 "ddgst": ${ddgst:-false} 00:31:28.308 }, 00:31:28.308 "method": "bdev_nvme_attach_controller" 00:31:28.308 } 00:31:28.308 EOF 00:31:28.308 )") 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.308 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:28.309 "params": { 00:31:28.309 "name": "Nvme0", 00:31:28.309 "trtype": "tcp", 00:31:28.309 "traddr": "10.0.0.2", 00:31:28.309 "adrfam": "ipv4", 00:31:28.309 "trsvcid": "4420", 00:31:28.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:28.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:28.309 "hdgst": true, 00:31:28.309 "ddgst": true 00:31:28.309 }, 00:31:28.309 "method": "bdev_nvme_attach_controller" 00:31:28.309 }' 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:28.309 10:20:07 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:28.569 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:28.569 ... 
00:31:28.569 fio-3.35 00:31:28.569 Starting 3 threads 00:31:28.569 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.801 00:31:40.801 filename0: (groupid=0, jobs=1): err= 0: pid=1506377: Thu Jul 25 10:20:18 2024 00:31:40.801 read: IOPS=246, BW=30.8MiB/s (32.3MB/s)(309MiB/10049msec) 00:31:40.801 slat (nsec): min=8165, max=32624, avg=9064.65, stdev=877.89 00:31:40.801 clat (usec): min=7987, max=54965, avg=12156.98, stdev=3447.55 00:31:40.801 lat (usec): min=7996, max=54973, avg=12166.04, stdev=3447.56 00:31:40.801 clat percentiles (usec): 00:31:40.801 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10421], 00:31:40.801 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12125], 60.00th=[12518], 00:31:40.801 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13829], 95.00th=[14222], 00:31:40.801 | 99.00th=[15401], 99.50th=[51119], 99.90th=[54264], 99.95th=[54789], 00:31:40.801 | 99.99th=[54789] 00:31:40.801 bw ( KiB/s): min=28672, max=35328, per=39.08%, avg=31628.80, stdev=1725.27, samples=20 00:31:40.801 iops : min= 224, max= 276, avg=247.10, stdev=13.48, samples=20 00:31:40.801 lat (msec) : 10=14.15%, 20=85.29%, 50=0.04%, 100=0.53% 00:31:40.801 cpu : usr=96.03%, sys=3.66%, ctx=14, majf=0, minf=80 00:31:40.801 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.801 issued rwts: total=2474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.801 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.801 filename0: (groupid=0, jobs=1): err= 0: pid=1506378: Thu Jul 25 10:20:18 2024 00:31:40.801 read: IOPS=125, BW=15.7MiB/s (16.5MB/s)(158MiB/10046msec) 00:31:40.801 slat (nsec): min=5656, max=32045, avg=6699.77, stdev=1227.59 00:31:40.801 clat (usec): min=11253, max=99977, avg=23763.89, stdev=14723.07 00:31:40.801 lat (usec): min=11259, max=99984, avg=23770.59, stdev=14723.06 00:31:40.801 clat percentiles (msec): 00:31:40.801 | 1.00th=[ 15], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 18], 00:31:40.801 | 30.00th=[ 18], 40.00th=[ 18], 50.00th=[ 19], 60.00th=[ 19], 00:31:40.801 | 70.00th=[ 20], 80.00th=[ 21], 90.00th=[ 58], 95.00th=[ 60], 00:31:40.801 | 99.00th=[ 62], 99.50th=[ 63], 99.90th=[ 101], 99.95th=[ 101], 00:31:40.801 | 99.99th=[ 101] 00:31:40.801 bw ( KiB/s): min=12544, max=19712, per=19.98%, avg=16166.40, stdev=1923.41, samples=20 00:31:40.801 iops : min= 98, max= 154, avg=126.30, stdev=15.03, samples=20 00:31:40.801 lat (msec) : 20=79.11%, 50=7.52%, 100=13.37% 00:31:40.801 cpu : usr=96.05%, sys=3.46%, ctx=694, majf=0, minf=223 00:31:40.801 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.801 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.801 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.801 issued rwts: total=1264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.801 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.801 filename0: (groupid=0, jobs=1): err= 0: pid=1506379: Thu Jul 25 10:20:18 2024 00:31:40.801 read: IOPS=260, BW=32.5MiB/s (34.1MB/s)(327MiB/10047msec) 00:31:40.801 slat (nsec): min=5760, max=34333, avg=9129.75, stdev=1779.79 00:31:40.801 clat (usec): min=6295, max=53476, avg=11497.88, stdev=1843.79 00:31:40.801 lat (usec): min=6302, max=53486, avg=11507.01, stdev=1843.88 00:31:40.801 clat percentiles (usec): 00:31:40.801 | 1.00th=[ 8094], 5.00th=[ 
8848], 10.00th=[ 9372], 20.00th=[10028], 00:31:40.801 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[11994], 00:31:40.801 | 70.00th=[12387], 80.00th=[12649], 90.00th=[13173], 95.00th=[13566], 00:31:40.801 | 99.00th=[14484], 99.50th=[14877], 99.90th=[19006], 99.95th=[47449], 00:31:40.801 | 99.99th=[53216] 00:31:40.801 bw ( KiB/s): min=31744, max=35072, per=41.33%, avg=33446.40, stdev=1077.43, samples=20 00:31:40.801 iops : min= 248, max= 274, avg=261.30, stdev= 8.42, samples=20 00:31:40.802 lat (msec) : 10=19.43%, 20=80.50%, 50=0.04%, 100=0.04% 00:31:40.802 cpu : usr=95.12%, sys=4.21%, ctx=11, majf=0, minf=102 00:31:40.802 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.802 issued rwts: total=2615,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.802 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:40.802 00:31:40.802 Run status group 0 (all jobs): 00:31:40.802 READ: bw=79.0MiB/s (82.9MB/s), 15.7MiB/s-32.5MiB/s (16.5MB/s-34.1MB/s), io=794MiB (833MB), run=10046-10049msec 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:40.802 00:31:40.802 real 0m11.229s 00:31:40.802 user 0m45.202s 00:31:40.802 sys 0m1.471s 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:40.802 10:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:40.802 ************************************ 00:31:40.802 END TEST fio_dif_digest 00:31:40.802 ************************************ 00:31:40.802 10:20:18 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:40.802 10:20:18 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:40.802 rmmod nvme_tcp 00:31:40.802 rmmod nvme_fabrics 00:31:40.802 rmmod 
nvme_keyring 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1496109 ']' 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1496109 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 1496109 ']' 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 1496109 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1496109 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1496109' 00:31:40.802 killing process with pid 1496109 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@969 -- # kill 1496109 00:31:40.802 10:20:18 nvmf_dif -- common/autotest_common.sh@974 -- # wait 1496109 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:40.802 10:20:18 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:42.720 Waiting for block devices as requested 00:31:42.720 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:43.045 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:43.045 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:43.045 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:43.045 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:43.306 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:43.306 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:43.306 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:43.567 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:31:43.567 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:31:43.828 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:31:43.828 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:31:43.828 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:31:43.828 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:31:44.089 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:31:44.089 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:31:44.089 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:31:44.352 10:20:23 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.352 10:20:23 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.352 10:20:23 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.352 10:20:23 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.352 10:20:23 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.352 10:20:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:44.352 10:20:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.900 10:20:25 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.900 00:31:46.900 real 1m17.401s 00:31:46.900 user 8m2.183s 00:31:46.900 sys 0m19.814s 00:31:46.900 10:20:25 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:46.900 10:20:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
00:31:46.900 ************************************ 00:31:46.900 END TEST nvmf_dif 00:31:46.900 ************************************ 00:31:46.900 10:20:25 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.900 10:20:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:46.900 10:20:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:46.900 10:20:25 -- common/autotest_common.sh@10 -- # set +x 00:31:46.900 ************************************ 00:31:46.900 START TEST nvmf_abort_qd_sizes 00:31:46.900 ************************************ 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.900 * Looking for test storage... 00:31:46.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.900 10:20:25 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.901 10:20:25 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:46.901 10:20:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:53.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:53.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:53.496 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:53.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:53.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:53.497 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:53.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:53.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:31:53.759 00:31:53.759 --- 10.0.0.2 ping statistics --- 00:31:53.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.759 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:53.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:53.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.416 ms 00:31:53.759 00:31:53.759 --- 10.0.0.1 ping statistics --- 00:31:53.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:53.759 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:53.759 10:20:32 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:57.196 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:31:57.196 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1515751 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1515751 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 1515751 ']' 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:57.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:57.456 10:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:57.456 [2024-07-25 10:20:36.497006] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:31:57.456 [2024-07-25 10:20:36.497043] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:57.456 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.456 [2024-07-25 10:20:36.553186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:57.717 [2024-07-25 10:20:36.619787] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:57.717 [2024-07-25 10:20:36.619824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:57.717 [2024-07-25 10:20:36.619831] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:57.717 [2024-07-25 10:20:36.619838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:57.717 [2024-07-25 10:20:36.619843] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:57.717 [2024-07-25 10:20:36.619990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:57.717 [2024-07-25 10:20:36.620121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:57.717 [2024-07-25 10:20:36.620282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.717 [2024-07-25 10:20:36.620282] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:31:58.288 10:20:37 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:58.288 10:20:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:58.288 ************************************ 00:31:58.288 START TEST spdk_target_abort 00:31:58.288 ************************************ 00:31:58.288 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:31:58.288 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:58.288 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:31:58.288 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.288 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.861 spdk_targetn1 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.861 [2024-07-25 10:20:37.701270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:58.861 [2024-07-25 10:20:37.741550] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.861 10:20:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:58.862 EAL: No free 2048 kB hugepages 
reported on node 1 00:31:58.862 [2024-07-25 10:20:37.901489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:136 len:8 PRP1 0x2000078be000 PRP2 0x0 00:31:58.862 [2024-07-25 10:20:37.901514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:31:58.862 [2024-07-25 10:20:37.916297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:480 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:31:58.862 [2024-07-25 10:20:37.916313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:31:59.123 [2024-07-25 10:20:38.003649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2640 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:59.123 [2024-07-25 10:20:38.003667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:59.123 [2024-07-25 10:20:38.031175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3136 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:31:59.123 [2024-07-25 10:20:38.031191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0089 p:0 m:0 dnr:0 00:32:02.427 Initializing NVMe Controllers 00:32:02.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:02.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:02.427 Initialization complete. Launching workers. 00:32:02.427 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8552, failed: 4 00:32:02.427 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 4199, failed to submit 4357 00:32:02.427 success 725, unsuccess 3474, failed 0 00:32:02.427 10:20:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:02.427 10:20:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:02.427 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.427 [2024-07-25 10:20:41.185638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:32:02.427 [2024-07-25 10:20:41.185673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:32:02.427 [2024-07-25 10:20:41.216363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:170 nsid:1 lba:968 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:32:02.427 [2024-07-25 10:20:41.216388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:170 cdw0:0 sqhd:0088 p:1 m:0 dnr:0 00:32:02.428 [2024-07-25 10:20:41.261948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:2176 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:32:02.428 [2024-07-25 10:20:41.261972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:32:02.428 [2024-07-25 10:20:41.267251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:4 cid:177 nsid:1 lba:2312 len:8 PRP1 0x200007c4c000 PRP2 0x0 00:32:02.428 [2024-07-25 10:20:41.267276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:02.428 [2024-07-25 10:20:41.275306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2520 len:8 PRP1 0x200007c46000 PRP2 0x0 00:32:02.428 [2024-07-25 10:20:41.275327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:02.428 [2024-07-25 10:20:41.283312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:186 nsid:1 lba:2712 len:8 PRP1 0x200007c5c000 PRP2 0x0 00:32:02.428 [2024-07-25 10:20:41.283333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:186 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:03.000 [2024-07-25 10:20:42.020555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:20272 len:8 PRP1 0x200007c46000 PRP2 0x0 00:32:03.000 [2024-07-25 10:20:42.020590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:00f1 p:0 m:0 dnr:0 00:32:03.572 [2024-07-25 10:20:42.491381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:191 nsid:1 lba:30944 len:8 PRP1 0x200007c48000 PRP2 0x0 00:32:03.572 [2024-07-25 10:20:42.491409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:191 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:03.833 [2024-07-25 10:20:42.793105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:182 nsid:1 lba:37864 len:8 PRP1 0x200007c46000 PRP2 0x0 00:32:03.833 [2024-07-25 10:20:42.793132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:182 cdw0:0 sqhd:0086 p:1 m:0 dnr:0 00:32:05.220 [2024-07-25 10:20:44.178288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:179 nsid:1 lba:70040 len:8 PRP1 0x200007c4e000 PRP2 0x0 00:32:05.220 [2024-07-25 10:20:44.178318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:179 cdw0:0 sqhd:0039 p:1 m:0 dnr:0 00:32:05.481 Initializing NVMe Controllers 00:32:05.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:05.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:05.481 Initialization complete. Launching workers. 
00:32:05.481 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8751, failed: 10 00:32:05.481 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1241, failed to submit 7520 00:32:05.481 success 356, unsuccess 885, failed 0 00:32:05.481 10:20:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:05.481 10:20:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:05.481 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.741 [2024-07-25 10:20:44.797978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:170 nsid:1 lba:33256 len:8 PRP1 0x2000078d0000 PRP2 0x0 00:32:05.741 [2024-07-25 10:20:44.798002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:170 cdw0:0 sqhd:00ab p:0 m:0 dnr:0 00:32:07.658 [2024-07-25 10:20:46.709761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:167 nsid:1 lba:222024 len:8 PRP1 0x2000078ca000 PRP2 0x0 00:32:07.658 [2024-07-25 10:20:46.709785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:167 cdw0:0 sqhd:00d8 p:0 m:0 dnr:0 00:32:08.602 Initializing NVMe Controllers 00:32:08.602 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:08.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:08.603 Initialization complete. Launching workers. 00:32:08.603 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37284, failed: 2 00:32:08.603 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2909, failed to submit 34377 00:32:08.603 success 649, unsuccess 2260, failed 0 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:08.603 10:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1515751 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 1515751 ']' 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 1515751 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:10.547 
10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1515751 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1515751' 00:32:10.547 killing process with pid 1515751 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 1515751 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 1515751 00:32:10.547 00:32:10.547 real 0m12.170s 00:32:10.547 user 0m49.399s 00:32:10.547 sys 0m1.976s 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.547 ************************************ 00:32:10.547 END TEST spdk_target_abort 00:32:10.547 ************************************ 00:32:10.547 10:20:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:10.547 10:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:10.547 10:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:10.547 10:20:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:10.547 ************************************ 00:32:10.547 START TEST kernel_target_abort 00:32:10.547 ************************************ 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:10.547 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 
00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:10.548 10:20:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:13.852 Waiting for block devices as requested 00:32:13.852 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:13.852 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:13.852 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.114 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.114 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:14.114 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.385 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.385 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.385 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:14.648 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.648 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.648 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.909 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.909 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:14.909 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.909 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:15.170 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:15.432 No valid GPT data, bailing 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 
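The block-device probing traced above (is_block_zoned, block_in_use via spdk-gpt.py and blkid) is what keeps the test from grabbing a disk that is already in use: a namespace is only selected when it is not zoned and carries no partition table, so "No valid GPT data, bailing" is the pass case here. A rough stand-in for that check, assuming /dev/nvme0n1 as the candidate device (the real helper also looks for SPDK-specific GPT partitions):

# rough stand-in for the device-selection check traced above
dev=nvme0n1
if [[ -e /sys/block/$dev/queue/zoned && "$(cat /sys/block/$dev/queue/zoned)" != none ]]; then
    echo "skip $dev: zoned namespace"
elif [[ -n "$(blkid -s PTTYPE -o value /dev/$dev)" ]]; then
    echo "skip $dev: existing partition table"
else
    echo "use $dev"
fi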
00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:15.432 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:15.433 00:32:15.433 Discovery Log Number of Records 2, Generation counter 2 00:32:15.433 =====Discovery Log Entry 0====== 00:32:15.433 trtype: tcp 00:32:15.433 adrfam: ipv4 00:32:15.433 subtype: current discovery subsystem 00:32:15.433 treq: not specified, sq flow control disable supported 00:32:15.433 portid: 1 00:32:15.433 trsvcid: 4420 00:32:15.433 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:15.433 traddr: 10.0.0.1 00:32:15.433 eflags: none 00:32:15.433 sectype: none 00:32:15.433 =====Discovery Log Entry 1====== 00:32:15.433 trtype: tcp 00:32:15.433 adrfam: ipv4 00:32:15.433 subtype: nvme subsystem 00:32:15.433 treq: not specified, sq flow control disable supported 00:32:15.433 portid: 1 00:32:15.433 trsvcid: 4420 00:32:15.433 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:15.433 traddr: 10.0.0.1 00:32:15.433 eflags: none 00:32:15.433 sectype: none 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local 
trsvcid=4420 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:15.433 10:20:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:15.694 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.997 Initializing NVMe Controllers 00:32:18.997 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:18.997 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:18.997 Initialization complete. Launching workers. 
00:32:18.997 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38970, failed: 0 00:32:18.997 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38970, failed to submit 0 00:32:18.997 success 0, unsuccess 38970, failed 0 00:32:18.997 10:20:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:18.997 10:20:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:18.997 EAL: No free 2048 kB hugepages reported on node 1 00:32:22.304 Initializing NVMe Controllers 00:32:22.304 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:22.304 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:22.304 Initialization complete. Launching workers. 00:32:22.304 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78296, failed: 0 00:32:22.304 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19718, failed to submit 58578 00:32:22.304 success 0, unsuccess 19718, failed 0 00:32:22.304 10:21:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:22.304 10:21:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:22.304 EAL: No free 2048 kB hugepages reported on node 1 00:32:24.853 Initializing NVMe Controllers 00:32:24.853 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:24.853 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:24.853 Initialization complete. Launching workers. 
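Each pass of the loop traced here is the same abort invocation with only the queue depth changed (4, 24, 64 against the kernel target). Condensed, the rabort helper comes down to something like this, with the workspace path as used in this run:

# condensed form of the queue-depth loop traced here
abort_bin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
tgt='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    # 50/50 read-write 4 KiB I/O; the tool aborts outstanding commands and prints the success/unsuccess totals
    "$abort_bin" -q "$qd" -w rw -M 50 -o 4096 -r "$tgt"
done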
00:32:24.853 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 75771, failed: 0 00:32:24.853 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18930, failed to submit 56841 00:32:24.853 success 0, unsuccess 18930, failed 0 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:24.853 10:21:03 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.163 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:28.163 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:28.431 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:28.431 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:30.344 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:30.344 00:32:30.344 real 0m19.767s 00:32:30.344 user 0m7.070s 00:32:30.344 sys 0m6.575s 00:32:30.344 10:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:30.344 10:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.344 ************************************ 00:32:30.344 END TEST kernel_target_abort 00:32:30.344 ************************************ 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:30.344 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:30.344 rmmod nvme_tcp 00:32:30.605 rmmod nvme_fabrics 00:32:30.605 rmmod nvme_keyring 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1515751 ']' 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1515751 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 1515751 ']' 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 1515751 00:32:30.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1515751) - No such process 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 1515751 is not found' 00:32:30.605 Process with pid 1515751 is not found 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:30.605 10:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.911 Waiting for block devices as requested 00:32:33.911 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:33.911 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:34.172 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:34.172 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:34.172 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:34.433 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:34.433 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:34.433 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:34.433 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:34.694 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:34.694 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:34.984 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:34.984 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:34.984 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:34.984 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:35.245 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:35.245 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:35.505 10:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:35.505 10:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:35.506 10:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:35.506 10:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:35.506 10:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.506 10:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:35.506 10:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:37.420 10:21:16 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:37.420 00:32:37.420 real 0m50.976s 00:32:37.420 user 1m1.729s 00:32:37.420 sys 0m18.982s 00:32:37.420 10:21:16 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:32:37.420 10:21:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:37.420 ************************************ 00:32:37.420 END TEST nvmf_abort_qd_sizes 00:32:37.420 ************************************ 00:32:37.682 10:21:16 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:37.682 10:21:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:37.682 10:21:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:37.682 10:21:16 -- common/autotest_common.sh@10 -- # set +x 00:32:37.682 ************************************ 00:32:37.682 START TEST keyring_file 00:32:37.682 ************************************ 00:32:37.682 10:21:16 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:32:37.682 * Looking for test storage... 00:32:37.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:37.682 10:21:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:37.682 10:21:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:37.682 10:21:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:37.682 10:21:16 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.682 10:21:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.682 10:21:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.682 10:21:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:37.682 10:21:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:37.682 10:21:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:37.682 10:21:16 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GWgywVTSES 00:32:37.682 10:21:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:37.682 10:21:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GWgywVTSES 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GWgywVTSES 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.GWgywVTSES 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.YLIdkAgoZ7 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:37.944 10:21:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.YLIdkAgoZ7 00:32:37.944 10:21:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.YLIdkAgoZ7 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.YLIdkAgoZ7 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=1526703 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1526703 00:32:37.944 10:21:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1526703 ']' 00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
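The prep_key calls traced above amount to: pick a private temp file with mktemp, render the hex key into the NVMe TLS PSK interchange format (the NVMeTLSkey-1 string built by format_key/format_interchange_psk), and chmod it to 0600 so only the owner can read it. Roughly:

# rough shape of prep_key as traced above; the interchange-format encoding itself is done by
# the inline python behind format_interchange_psk and is not re-implemented here
key_hex=00112233445566778899aabbccddeeff
key_path=$(mktemp)                                   # e.g. /tmp/tmp.GWgywVTSES in this run
format_interchange_psk "$key_hex" 0 > "$key_path"    # helper sourced by the test, digest argument as in the trace
chmod 0600 "$key_path"                               # looser modes are rejected later in this test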
00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:37.944 10:21:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:37.944 [2024-07-25 10:21:16.938045] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:37.944 [2024-07-25 10:21:16.938117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526703 ] 00:32:37.944 EAL: No free 2048 kB hugepages reported on node 1 00:32:37.944 [2024-07-25 10:21:17.003932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:38.204 [2024-07-25 10:21:17.079213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:38.777 10:21:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 [2024-07-25 10:21:17.722984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.777 null0 00:32:38.777 [2024-07-25 10:21:17.755033] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:38.777 [2024-07-25 10:21:17.755397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:38.777 [2024-07-25 10:21:17.763037] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:38.777 10:21:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 [2024-07-25 10:21:17.775069] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:38.777 request: 00:32:38.777 { 00:32:38.777 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:38.777 "secure_channel": false, 00:32:38.777 "listen_address": { 00:32:38.777 "trtype": "tcp", 00:32:38.777 "traddr": "127.0.0.1", 00:32:38.777 "trsvcid": "4420" 00:32:38.777 }, 00:32:38.777 "method": "nvmf_subsystem_add_listener", 00:32:38.777 "req_id": 1 00:32:38.777 } 00:32:38.777 Got JSON-RPC error response 
00:32:38.777 response: 00:32:38.777 { 00:32:38.777 "code": -32602, 00:32:38.777 "message": "Invalid parameters" 00:32:38.777 } 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:38.777 10:21:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=1526828 00:32:38.777 10:21:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1526828 /var/tmp/bperf.sock 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1526828 ']' 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:38.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:38.777 10:21:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:38.777 10:21:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:38.777 [2024-07-25 10:21:17.829121] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
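bdevperf is started here with -z, so it comes up idle and waits to be driven over its own RPC socket (-r /var/tmp/bperf.sock); every bperf_cmd in this trace is rpc.py aimed at that socket, and I/O only begins once perform_tests is sent. In shorthand:

# shorthand for the bperf_cmd pattern used throughout this test
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_file_add_key key0 /tmp/tmp.GWgywVTSES          # register a key file with the bdevperf keyring
$rpc keyring_get_keys | jq '.[] | select(.name == "key0")'  # inspect name/path/refcnt
# the actual workload is kicked off separately:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests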
00:32:38.777 [2024-07-25 10:21:17.829169] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526828 ] 00:32:38.777 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.777 [2024-07-25 10:21:17.905181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:39.038 [2024-07-25 10:21:17.969047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:39.611 10:21:18 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:39.611 10:21:18 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:39.611 10:21:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:39.611 10:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:39.611 10:21:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YLIdkAgoZ7 00:32:39.611 10:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YLIdkAgoZ7 00:32:39.872 10:21:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:39.872 10:21:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:39.872 10:21:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:39.872 10:21:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:39.872 10:21:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.132 10:21:19 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.GWgywVTSES == \/\t\m\p\/\t\m\p\.\G\W\g\y\w\V\T\S\E\S ]] 00:32:40.132 10:21:19 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:40.132 10:21:19 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:40.132 10:21:19 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.YLIdkAgoZ7 == \/\t\m\p\/\t\m\p\.\Y\L\I\d\k\A\g\o\Z\7 ]] 00:32:40.132 10:21:19 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.132 10:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.393 10:21:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:40.393 10:21:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:40.393 10:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:40.393 10:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.393 
10:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.393 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.393 10:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:40.654 10:21:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:40.654 10:21:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:40.654 [2024-07-25 10:21:19.665559] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:40.654 nvme0n1 00:32:40.654 10:21:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:40.654 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.916 10:21:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:40.916 10:21:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:40.916 10:21:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:40.916 10:21:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:40.916 10:21:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:40.916 10:21:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:40.916 10:21:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:41.177 10:21:20 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:41.177 10:21:20 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:41.177 Running I/O for 1 seconds... 
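The attach step above is where the keyring meets the TLS transport: --psk names a key previously registered with keyring_file_add_key rather than a file path, and the refcnt checks around it show the controller pinning that key (key0 sits at refcnt 2 while nvme0 holds it and drops back to 1 after detach). Isolated, with $rpc as in the earlier sketch:

# the TLS-enabled attach and its teardown as issued in this trace
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
$rpc bdev_nvme_detach_controller nvme0      # releases the key reference again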
00:32:42.121 00:32:42.121 Latency(us) 00:32:42.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.121 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:42.121 nvme0n1 : 1.04 6102.96 23.84 0.00 0.00 20876.35 3426.99 63351.47 00:32:42.121 =================================================================================================================== 00:32:42.121 Total : 6102.96 23.84 0.00 0.00 20876.35 3426.99 63351.47 00:32:42.121 0 00:32:42.121 10:21:21 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:42.121 10:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:42.382 10:21:21 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:42.382 10:21:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.382 10:21:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.382 10:21:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.382 10:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.382 10:21:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.644 10:21:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:42.644 10:21:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.644 10:21:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:42.644 10:21:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:42.644 10:21:21 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:42.644 10:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:32:42.905 [2024-07-25 10:21:21.853706] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_re[2024-07-25 10:21:21.853713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4170 (107)ad_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:42.905 : Transport endpoint is not connected 00:32:42.905 [2024-07-25 10:21:21.854708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e4170 (9): Bad file descriptor 00:32:42.905 [2024-07-25 10:21:21.855713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:42.905 [2024-07-25 10:21:21.855720] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:42.905 [2024-07-25 10:21:21.855725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:42.905 request: 00:32:42.905 { 00:32:42.905 "name": "nvme0", 00:32:42.905 "trtype": "tcp", 00:32:42.905 "traddr": "127.0.0.1", 00:32:42.905 "adrfam": "ipv4", 00:32:42.905 "trsvcid": "4420", 00:32:42.905 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:42.905 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:42.905 "prchk_reftag": false, 00:32:42.905 "prchk_guard": false, 00:32:42.905 "hdgst": false, 00:32:42.905 "ddgst": false, 00:32:42.905 "psk": "key1", 00:32:42.905 "method": "bdev_nvme_attach_controller", 00:32:42.905 "req_id": 1 00:32:42.905 } 00:32:42.905 Got JSON-RPC error response 00:32:42.905 response: 00:32:42.905 { 00:32:42.905 "code": -5, 00:32:42.905 "message": "Input/output error" 00:32:42.905 } 00:32:42.905 10:21:21 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:42.905 10:21:21 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:42.905 10:21:21 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:42.905 10:21:21 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:42.905 10:21:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:42.905 10:21:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:42.905 10:21:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.905 10:21:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.905 10:21:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:42.905 10:21:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:42.905 10:21:22 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:42.905 10:21:22 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:42.905 10:21:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:42.905 10:21:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:42.906 10:21:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:42.906 10:21:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:42.906 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.167 10:21:22 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:43.167 10:21:22 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:43.167 10:21:22 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:43.427 10:21:22 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:43.427 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:43.427 10:21:22 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:43.427 10:21:22 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:43.427 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:43.688 10:21:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:43.688 10:21:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.688 [2024-07-25 10:21:22.798380] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.GWgywVTSES': 0100660 00:32:43.688 [2024-07-25 10:21:22.798396] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:43.688 request: 00:32:43.688 { 00:32:43.688 "name": "key0", 00:32:43.688 "path": "/tmp/tmp.GWgywVTSES", 00:32:43.688 "method": "keyring_file_add_key", 00:32:43.688 "req_id": 1 00:32:43.688 } 00:32:43.688 Got JSON-RPC error response 00:32:43.688 response: 00:32:43.688 { 00:32:43.688 "code": -1, 00:32:43.688 "message": "Operation not permitted" 00:32:43.688 } 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:43.688 10:21:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:43.688 10:21:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.688 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GWgywVTSES 00:32:43.949 10:21:22 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.GWgywVTSES 00:32:43.949 10:21:22 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 
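The negative case above is the point of this block: the file-based keyring rejects any key file that is group- or world-accessible, so the 0660 copy fails with "Invalid permissions" while the same file at 0600 is accepted; the rm -f that follows leaves the key registered but, as the next step shows, attaching with it then fails cleanly with "No such file or directory". In short (with $rpc and key_path as in the earlier sketches):

# permission and lifetime rules exercised here
chmod 0660 "$key_path"
$rpc keyring_file_add_key key0 "$key_path"    # rejected: key files must not be group/world accessible
chmod 0600 "$key_path"
$rpc keyring_file_add_key key0 "$key_path"    # accepted
rm -f "$key_path"                             # key stays registered, but attaches can no longer read the file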
00:32:43.949 10:21:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:43.949 10:21:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:43.949 10:21:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:43.949 10:21:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:43.949 10:21:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.210 10:21:23 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:44.210 10:21:23 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.210 10:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.210 [2024-07-25 10:21:23.275588] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.GWgywVTSES': No such file or directory 00:32:44.210 [2024-07-25 10:21:23.275600] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:44.210 [2024-07-25 10:21:23.275615] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:44.210 [2024-07-25 10:21:23.275620] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:44.210 [2024-07-25 10:21:23.275625] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:44.210 request: 00:32:44.210 { 00:32:44.210 "name": "nvme0", 00:32:44.210 "trtype": "tcp", 00:32:44.210 "traddr": "127.0.0.1", 00:32:44.210 "adrfam": "ipv4", 00:32:44.210 "trsvcid": "4420", 00:32:44.210 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:44.210 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:44.210 "prchk_reftag": false, 00:32:44.210 "prchk_guard": false, 00:32:44.210 "hdgst": false, 00:32:44.210 "ddgst": false, 00:32:44.210 "psk": "key0", 00:32:44.210 "method": "bdev_nvme_attach_controller", 00:32:44.210 "req_id": 1 00:32:44.210 } 00:32:44.210 Got JSON-RPC error response 00:32:44.210 response: 00:32:44.210 { 00:32:44.210 "code": -19, 00:32:44.210 "message": "No such device" 00:32:44.210 } 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:32:44.210 
10:21:23 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:44.210 10:21:23 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:44.210 10:21:23 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:32:44.210 10:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:44.472 10:21:23 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mW4pPRWtdc 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:44.472 10:21:23 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mW4pPRWtdc 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mW4pPRWtdc 00:32:44.472 10:21:23 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.mW4pPRWtdc 00:32:44.472 10:21:23 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mW4pPRWtdc 00:32:44.472 10:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mW4pPRWtdc 00:32:44.732 10:21:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.732 10:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:44.732 nvme0n1 00:32:44.994 10:21:23 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:44.994 10:21:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:44.994 10:21:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:44.994 10:21:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:44.994 10:21:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:44.994 10:21:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:44.994 10:21:24 keyring_file -- 
keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:44.994 10:21:24 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:44.994 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:45.255 10:21:24 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:45.255 10:21:24 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:45.255 10:21:24 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:45.255 10:21:24 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.255 10:21:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:45.516 10:21:24 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:45.516 10:21:24 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:45.516 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:45.777 10:21:24 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:45.777 10:21:24 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:45.777 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:45.777 10:21:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:45.777 10:21:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mW4pPRWtdc 00:32:45.777 10:21:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mW4pPRWtdc 00:32:46.038 10:21:25 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.YLIdkAgoZ7 00:32:46.038 10:21:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.YLIdkAgoZ7 00:32:46.038 10:21:25 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.038 10:21:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:46.298 nvme0n1 00:32:46.298 10:21:25 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:46.298 10:21:25 
keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:46.560 10:21:25 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:46.560 "subsystems": [ 00:32:46.560 { 00:32:46.560 "subsystem": "keyring", 00:32:46.560 "config": [ 00:32:46.560 { 00:32:46.560 "method": "keyring_file_add_key", 00:32:46.560 "params": { 00:32:46.560 "name": "key0", 00:32:46.560 "path": "/tmp/tmp.mW4pPRWtdc" 00:32:46.560 } 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "method": "keyring_file_add_key", 00:32:46.560 "params": { 00:32:46.560 "name": "key1", 00:32:46.560 "path": "/tmp/tmp.YLIdkAgoZ7" 00:32:46.560 } 00:32:46.560 } 00:32:46.560 ] 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "subsystem": "iobuf", 00:32:46.560 "config": [ 00:32:46.560 { 00:32:46.560 "method": "iobuf_set_options", 00:32:46.560 "params": { 00:32:46.560 "small_pool_count": 8192, 00:32:46.560 "large_pool_count": 1024, 00:32:46.560 "small_bufsize": 8192, 00:32:46.560 "large_bufsize": 135168 00:32:46.560 } 00:32:46.560 } 00:32:46.560 ] 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "subsystem": "sock", 00:32:46.560 "config": [ 00:32:46.560 { 00:32:46.560 "method": "sock_set_default_impl", 00:32:46.560 "params": { 00:32:46.560 "impl_name": "posix" 00:32:46.560 } 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "method": "sock_impl_set_options", 00:32:46.560 "params": { 00:32:46.560 "impl_name": "ssl", 00:32:46.560 "recv_buf_size": 4096, 00:32:46.560 "send_buf_size": 4096, 00:32:46.560 "enable_recv_pipe": true, 00:32:46.560 "enable_quickack": false, 00:32:46.560 "enable_placement_id": 0, 00:32:46.560 "enable_zerocopy_send_server": true, 00:32:46.560 "enable_zerocopy_send_client": false, 00:32:46.560 "zerocopy_threshold": 0, 00:32:46.560 "tls_version": 0, 00:32:46.560 "enable_ktls": false 00:32:46.560 } 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "method": "sock_impl_set_options", 00:32:46.560 "params": { 00:32:46.560 "impl_name": "posix", 00:32:46.560 "recv_buf_size": 2097152, 00:32:46.560 "send_buf_size": 2097152, 00:32:46.560 "enable_recv_pipe": true, 00:32:46.560 "enable_quickack": false, 00:32:46.560 "enable_placement_id": 0, 00:32:46.560 "enable_zerocopy_send_server": true, 00:32:46.560 "enable_zerocopy_send_client": false, 00:32:46.560 "zerocopy_threshold": 0, 00:32:46.560 "tls_version": 0, 00:32:46.560 "enable_ktls": false 00:32:46.560 } 00:32:46.560 } 00:32:46.560 ] 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "subsystem": "vmd", 00:32:46.560 "config": [] 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "subsystem": "accel", 00:32:46.560 "config": [ 00:32:46.560 { 00:32:46.560 "method": "accel_set_options", 00:32:46.560 "params": { 00:32:46.560 "small_cache_size": 128, 00:32:46.560 "large_cache_size": 16, 00:32:46.560 "task_count": 2048, 00:32:46.560 "sequence_count": 2048, 00:32:46.560 "buf_count": 2048 00:32:46.560 } 00:32:46.560 } 00:32:46.560 ] 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "subsystem": "bdev", 00:32:46.560 "config": [ 00:32:46.560 { 00:32:46.560 "method": "bdev_set_options", 00:32:46.560 "params": { 00:32:46.560 "bdev_io_pool_size": 65535, 00:32:46.560 "bdev_io_cache_size": 256, 00:32:46.560 "bdev_auto_examine": true, 00:32:46.560 "iobuf_small_cache_size": 128, 00:32:46.560 "iobuf_large_cache_size": 16 00:32:46.560 } 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "method": "bdev_raid_set_options", 00:32:46.560 "params": { 00:32:46.560 "process_window_size_kb": 1024, 00:32:46.560 "process_max_bandwidth_mb_sec": 0 00:32:46.560 } 00:32:46.560 }, 
00:32:46.560 { 00:32:46.560 "method": "bdev_iscsi_set_options", 00:32:46.560 "params": { 00:32:46.560 "timeout_sec": 30 00:32:46.560 } 00:32:46.560 }, 00:32:46.560 { 00:32:46.560 "method": "bdev_nvme_set_options", 00:32:46.560 "params": { 00:32:46.560 "action_on_timeout": "none", 00:32:46.560 "timeout_us": 0, 00:32:46.560 "timeout_admin_us": 0, 00:32:46.560 "keep_alive_timeout_ms": 10000, 00:32:46.560 "arbitration_burst": 0, 00:32:46.560 "low_priority_weight": 0, 00:32:46.560 "medium_priority_weight": 0, 00:32:46.560 "high_priority_weight": 0, 00:32:46.560 "nvme_adminq_poll_period_us": 10000, 00:32:46.560 "nvme_ioq_poll_period_us": 0, 00:32:46.560 "io_queue_requests": 512, 00:32:46.560 "delay_cmd_submit": true, 00:32:46.560 "transport_retry_count": 4, 00:32:46.560 "bdev_retry_count": 3, 00:32:46.560 "transport_ack_timeout": 0, 00:32:46.560 "ctrlr_loss_timeout_sec": 0, 00:32:46.560 "reconnect_delay_sec": 0, 00:32:46.560 "fast_io_fail_timeout_sec": 0, 00:32:46.560 "disable_auto_failback": false, 00:32:46.560 "generate_uuids": false, 00:32:46.560 "transport_tos": 0, 00:32:46.560 "nvme_error_stat": false, 00:32:46.560 "rdma_srq_size": 0, 00:32:46.560 "io_path_stat": false, 00:32:46.560 "allow_accel_sequence": false, 00:32:46.560 "rdma_max_cq_size": 0, 00:32:46.561 "rdma_cm_event_timeout_ms": 0, 00:32:46.561 "dhchap_digests": [ 00:32:46.561 "sha256", 00:32:46.561 "sha384", 00:32:46.561 "sha512" 00:32:46.561 ], 00:32:46.561 "dhchap_dhgroups": [ 00:32:46.561 "null", 00:32:46.561 "ffdhe2048", 00:32:46.561 "ffdhe3072", 00:32:46.561 "ffdhe4096", 00:32:46.561 "ffdhe6144", 00:32:46.561 "ffdhe8192" 00:32:46.561 ] 00:32:46.561 } 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "method": "bdev_nvme_attach_controller", 00:32:46.561 "params": { 00:32:46.561 "name": "nvme0", 00:32:46.561 "trtype": "TCP", 00:32:46.561 "adrfam": "IPv4", 00:32:46.561 "traddr": "127.0.0.1", 00:32:46.561 "trsvcid": "4420", 00:32:46.561 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.561 "prchk_reftag": false, 00:32:46.561 "prchk_guard": false, 00:32:46.561 "ctrlr_loss_timeout_sec": 0, 00:32:46.561 "reconnect_delay_sec": 0, 00:32:46.561 "fast_io_fail_timeout_sec": 0, 00:32:46.561 "psk": "key0", 00:32:46.561 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.561 "hdgst": false, 00:32:46.561 "ddgst": false 00:32:46.561 } 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "method": "bdev_nvme_set_hotplug", 00:32:46.561 "params": { 00:32:46.561 "period_us": 100000, 00:32:46.561 "enable": false 00:32:46.561 } 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "method": "bdev_wait_for_examine" 00:32:46.561 } 00:32:46.561 ] 00:32:46.561 }, 00:32:46.561 { 00:32:46.561 "subsystem": "nbd", 00:32:46.561 "config": [] 00:32:46.561 } 00:32:46.561 ] 00:32:46.561 }' 00:32:46.561 10:21:25 keyring_file -- keyring/file.sh@114 -- # killprocess 1526828 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1526828 ']' 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1526828 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526828 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 1526828' 00:32:46.561 killing process with pid 1526828 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@969 -- # kill 1526828 00:32:46.561 Received shutdown signal, test time was about 1.000000 seconds 00:32:46.561 00:32:46.561 Latency(us) 00:32:46.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.561 =================================================================================================================== 00:32:46.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.561 10:21:25 keyring_file -- common/autotest_common.sh@974 -- # wait 1526828 00:32:46.822 10:21:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=1528516 00:32:46.822 10:21:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1528516 /var/tmp/bperf.sock 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 1528516 ']' 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:46.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:46.822 10:21:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:46.822 10:21:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:46.822 10:21:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:46.822 "subsystems": [ 00:32:46.822 { 00:32:46.822 "subsystem": "keyring", 00:32:46.822 "config": [ 00:32:46.822 { 00:32:46.822 "method": "keyring_file_add_key", 00:32:46.822 "params": { 00:32:46.822 "name": "key0", 00:32:46.822 "path": "/tmp/tmp.mW4pPRWtdc" 00:32:46.822 } 00:32:46.822 }, 00:32:46.822 { 00:32:46.823 "method": "keyring_file_add_key", 00:32:46.823 "params": { 00:32:46.823 "name": "key1", 00:32:46.823 "path": "/tmp/tmp.YLIdkAgoZ7" 00:32:46.823 } 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "iobuf", 00:32:46.823 "config": [ 00:32:46.823 { 00:32:46.823 "method": "iobuf_set_options", 00:32:46.823 "params": { 00:32:46.823 "small_pool_count": 8192, 00:32:46.823 "large_pool_count": 1024, 00:32:46.823 "small_bufsize": 8192, 00:32:46.823 "large_bufsize": 135168 00:32:46.823 } 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "sock", 00:32:46.823 "config": [ 00:32:46.823 { 00:32:46.823 "method": "sock_set_default_impl", 00:32:46.823 "params": { 00:32:46.823 "impl_name": "posix" 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "sock_impl_set_options", 00:32:46.823 "params": { 00:32:46.823 "impl_name": "ssl", 00:32:46.823 "recv_buf_size": 4096, 00:32:46.823 "send_buf_size": 4096, 00:32:46.823 "enable_recv_pipe": true, 00:32:46.823 "enable_quickack": false, 00:32:46.823 "enable_placement_id": 0, 00:32:46.823 "enable_zerocopy_send_server": true, 00:32:46.823 "enable_zerocopy_send_client": false, 00:32:46.823 "zerocopy_threshold": 0, 00:32:46.823 "tls_version": 0, 00:32:46.823 "enable_ktls": false 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": 
"sock_impl_set_options", 00:32:46.823 "params": { 00:32:46.823 "impl_name": "posix", 00:32:46.823 "recv_buf_size": 2097152, 00:32:46.823 "send_buf_size": 2097152, 00:32:46.823 "enable_recv_pipe": true, 00:32:46.823 "enable_quickack": false, 00:32:46.823 "enable_placement_id": 0, 00:32:46.823 "enable_zerocopy_send_server": true, 00:32:46.823 "enable_zerocopy_send_client": false, 00:32:46.823 "zerocopy_threshold": 0, 00:32:46.823 "tls_version": 0, 00:32:46.823 "enable_ktls": false 00:32:46.823 } 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "vmd", 00:32:46.823 "config": [] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "accel", 00:32:46.823 "config": [ 00:32:46.823 { 00:32:46.823 "method": "accel_set_options", 00:32:46.823 "params": { 00:32:46.823 "small_cache_size": 128, 00:32:46.823 "large_cache_size": 16, 00:32:46.823 "task_count": 2048, 00:32:46.823 "sequence_count": 2048, 00:32:46.823 "buf_count": 2048 00:32:46.823 } 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "bdev", 00:32:46.823 "config": [ 00:32:46.823 { 00:32:46.823 "method": "bdev_set_options", 00:32:46.823 "params": { 00:32:46.823 "bdev_io_pool_size": 65535, 00:32:46.823 "bdev_io_cache_size": 256, 00:32:46.823 "bdev_auto_examine": true, 00:32:46.823 "iobuf_small_cache_size": 128, 00:32:46.823 "iobuf_large_cache_size": 16 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_raid_set_options", 00:32:46.823 "params": { 00:32:46.823 "process_window_size_kb": 1024, 00:32:46.823 "process_max_bandwidth_mb_sec": 0 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_iscsi_set_options", 00:32:46.823 "params": { 00:32:46.823 "timeout_sec": 30 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_nvme_set_options", 00:32:46.823 "params": { 00:32:46.823 "action_on_timeout": "none", 00:32:46.823 "timeout_us": 0, 00:32:46.823 "timeout_admin_us": 0, 00:32:46.823 "keep_alive_timeout_ms": 10000, 00:32:46.823 "arbitration_burst": 0, 00:32:46.823 "low_priority_weight": 0, 00:32:46.823 "medium_priority_weight": 0, 00:32:46.823 "high_priority_weight": 0, 00:32:46.823 "nvme_adminq_poll_period_us": 10000, 00:32:46.823 "nvme_ioq_poll_period_us": 0, 00:32:46.823 "io_queue_requests": 512, 00:32:46.823 "delay_cmd_submit": true, 00:32:46.823 "transport_retry_count": 4, 00:32:46.823 "bdev_retry_count": 3, 00:32:46.823 "transport_ack_timeout": 0, 00:32:46.823 "ctrlr_loss_timeout_sec": 0, 00:32:46.823 "reconnect_delay_sec": 0, 00:32:46.823 "fast_io_fail_timeout_sec": 0, 00:32:46.823 "disable_auto_failback": false, 00:32:46.823 "generate_uuids": false, 00:32:46.823 "transport_tos": 0, 00:32:46.823 "nvme_error_stat": false, 00:32:46.823 "rdma_srq_size": 0, 00:32:46.823 "io_path_stat": false, 00:32:46.823 "allow_accel_sequence": false, 00:32:46.823 "rdma_max_cq_size": 0, 00:32:46.823 "rdma_cm_event_timeout_ms": 0, 00:32:46.823 "dhchap_digests": [ 00:32:46.823 "sha256", 00:32:46.823 "sha384", 00:32:46.823 "sha512" 00:32:46.823 ], 00:32:46.823 "dhchap_dhgroups": [ 00:32:46.823 "null", 00:32:46.823 "ffdhe2048", 00:32:46.823 "ffdhe3072", 00:32:46.823 "ffdhe4096", 00:32:46.823 "ffdhe6144", 00:32:46.823 "ffdhe8192" 00:32:46.823 ] 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_nvme_attach_controller", 00:32:46.823 "params": { 00:32:46.823 "name": "nvme0", 00:32:46.823 "trtype": "TCP", 00:32:46.823 "adrfam": "IPv4", 00:32:46.823 "traddr": "127.0.0.1", 00:32:46.823 "trsvcid": "4420", 
00:32:46.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.823 "prchk_reftag": false, 00:32:46.823 "prchk_guard": false, 00:32:46.823 "ctrlr_loss_timeout_sec": 0, 00:32:46.823 "reconnect_delay_sec": 0, 00:32:46.823 "fast_io_fail_timeout_sec": 0, 00:32:46.823 "psk": "key0", 00:32:46.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:46.823 "hdgst": false, 00:32:46.823 "ddgst": false 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_nvme_set_hotplug", 00:32:46.823 "params": { 00:32:46.823 "period_us": 100000, 00:32:46.823 "enable": false 00:32:46.823 } 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "method": "bdev_wait_for_examine" 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }, 00:32:46.823 { 00:32:46.823 "subsystem": "nbd", 00:32:46.823 "config": [] 00:32:46.823 } 00:32:46.823 ] 00:32:46.823 }' 00:32:46.823 [2024-07-25 10:21:25.832114] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:46.823 [2024-07-25 10:21:25.832172] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1528516 ] 00:32:46.823 EAL: No free 2048 kB hugepages reported on node 1 00:32:46.823 [2024-07-25 10:21:25.905581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:47.085 [2024-07-25 10:21:25.958910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.085 [2024-07-25 10:21:26.100317] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:47.656 10:21:26 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:47.656 10:21:26 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:32:47.656 10:21:26 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:47.656 10:21:26 keyring_file -- keyring/file.sh@120 -- # jq length 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.656 10:21:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:32:47.656 10:21:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.656 10:21:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:47.917 10:21:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:32:47.917 10:21:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:32:47.917 10:21:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:47.917 10:21:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:47.917 10:21:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:47.917 10:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:47.917 10:21:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@122 
-- # (( 1 == 1 )) 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:32:48.179 10:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@1 -- # cleanup 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.mW4pPRWtdc /tmp/tmp.YLIdkAgoZ7 00:32:48.179 10:21:27 keyring_file -- keyring/file.sh@20 -- # killprocess 1528516 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1528516 ']' 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1528516 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1528516 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1528516' 00:32:48.179 killing process with pid 1528516 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@969 -- # kill 1528516 00:32:48.179 Received shutdown signal, test time was about 1.000000 seconds 00:32:48.179 00:32:48.179 Latency(us) 00:32:48.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.179 =================================================================================================================== 00:32:48.179 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:48.179 10:21:27 keyring_file -- common/autotest_common.sh@974 -- # wait 1528516 00:32:48.441 10:21:27 keyring_file -- keyring/file.sh@21 -- # killprocess 1526703 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 1526703 ']' 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@954 -- # kill -0 1526703 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@955 -- # uname 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1526703 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1526703' 00:32:48.441 killing process with pid 1526703 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@969 -- # kill 1526703 00:32:48.441 [2024-07-25 10:21:27.453067] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:32:48.441 10:21:27 keyring_file -- common/autotest_common.sh@974 -- # wait 1526703 00:32:48.703 00:32:48.703 real 0m11.034s 00:32:48.703 user 0m25.611s 00:32:48.703 sys 0m2.602s 00:32:48.703 10:21:27 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:48.703 10:21:27 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:48.703 ************************************ 00:32:48.703 END TEST keyring_file 00:32:48.703 ************************************ 00:32:48.703 10:21:27 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:32:48.703 10:21:27 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:48.703 10:21:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:32:48.703 10:21:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:32:48.703 10:21:27 -- common/autotest_common.sh@10 -- # set +x 00:32:48.703 ************************************ 00:32:48.703 START TEST keyring_linux 00:32:48.703 ************************************ 00:32:48.703 10:21:27 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:32:48.703 * Looking for test storage... 00:32:48.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:32:48.965 10:21:27 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:32:48.965 10:21:27 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.965 10:21:27 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.966 10:21:27 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.966 10:21:27 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.966 10:21:27 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.966 10:21:27 keyring_linux -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.966 10:21:27 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.966 10:21:27 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.966 10:21:27 keyring_linux -- paths/export.sh@5 -- # export PATH 00:32:48.966 10:21:27 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # 
key=00112233445566778899aabbccddeeff 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:32:48.966 /tmp/:spdk-test:key0 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:32:48.966 10:21:27 keyring_linux -- nvmf/common.sh@705 -- # python - 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:32:48.966 10:21:27 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:32:48.966 /tmp/:spdk-test:key1 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1529054 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1529054 00:32:48.966 10:21:27 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1529054 ']' 00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
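Note on the key material prepared above: prep_key writes each temporary key file (/tmp/tmp.mW4pPRWtdc earlier, /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 here) by piping the hex string through the inline python call traced under format_interchange_psk / format_key (nvmf/common.sh), yielding a string of the form NVMeTLSkey-1:00:<base64>:. The sketch below only approximates that transformation from the values visible in this log: the helper name format_interchange_psk_sketch, the python3 invocation, the ASCII handling of the hex string and the little-endian CRC-32 suffix are assumptions, not the SPDK implementation.

# Sketch only: an approximation of what format_interchange_psk/format_key
# (nvmf/common.sh, traced above) appear to do for a configured hex key and
# digest 0. Names and byte order are assumptions, as noted in the lead-in.
format_interchange_psk_sketch() {
	local key=$1 digest=$2
	python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib

key = sys.argv[1].encode("ascii")            # the hex string is treated as ASCII text, as in the test
digest = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed little-endian CRC-32 appended to the key bytes
b64 = base64.b64encode(key + crc).decode()
print(f"NVMeTLSkey-1:{digest:02x}:{b64}:")
PYEOF
}

format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0

For key0 this run records the value NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, which is what the sketch should print if the byte-order assumption holds.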
00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:48.966 10:21:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:48.966 [2024-07-25 10:21:28.025336] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:32:48.966 [2024-07-25 10:21:28.025409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529054 ] 00:32:48.966 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.966 [2024-07-25 10:21:28.088107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:49.227 [2024-07-25 10:21:28.162587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.799 [2024-07-25 10:21:28.789273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:49.799 null0 00:32:49.799 [2024-07-25 10:21:28.821326] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:49.799 [2024-07-25 10:21:28.821724] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:32:49.799 492880869 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:32:49.799 105541499 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1529079 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1529079 /var/tmp/bperf.sock 00:32:49.799 10:21:28 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 1529079 ']' 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:49.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:32:49.799 10:21:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:49.799 [2024-07-25 10:21:28.905302] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
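The keyring_linux flow differs from keyring_file in that the interchange strings are loaded into the kernel session keyring instead of being read from files: keyctl add prints the key serials (492880869 and 105541499 in this run), linux.sh later resolves the name back to a serial with keyctl search, dumps the payload with keyctl print, and drops the links during cleanup. A condensed sketch of that sequence, reusing the key0 name and payload from this run, follows; it is plain keyutils usage rather than a copy of the test script.

KEY_NAME=":spdk-test:key0"
PSK="NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

sn=$(keyctl add user "$KEY_NAME" "$PSK" @s)   # add to the session keyring; prints the serial (492880869 in this run)
keyctl search @s user "$KEY_NAME"             # resolve the serial back from the name, as get_keysn does
keyctl print "$sn"                            # payload should echo the interchange string
keyctl unlink "$sn"                           # detach the key, matching the '1 links removed' cleanup output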
00:32:49.799 [2024-07-25 10:21:28.905367] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1529079 ] 00:32:49.799 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.060 [2024-07-25 10:21:28.981374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.060 [2024-07-25 10:21:29.035141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.633 10:21:29 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:32:50.633 10:21:29 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:32:50.633 10:21:29 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:32:50.633 10:21:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:32:50.893 10:21:29 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:32:50.893 10:21:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:50.893 10:21:30 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.154 10:21:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:32:51.154 [2024-07-25 10:21:30.165661] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:51.154 nvme0n1 00:32:51.154 10:21:30 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:32:51.154 10:21:30 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:32:51.154 10:21:30 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:51.154 10:21:30 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:51.154 10:21:30 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:51.154 10:21:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.415 10:21:30 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:32:51.415 10:21:30 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:51.415 10:21:30 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:32:51.415 10:21:30 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:32:51.415 10:21:30 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:51.415 10:21:30 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:51.415 10:21:30 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@25 -- # sn=492880869 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@26 -- # [[ 492880869 == \4\9\2\8\8\0\8\6\9 ]] 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 492880869 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:32:51.687 10:21:30 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:51.687 Running I/O for 1 seconds... 00:32:52.684 00:32:52.684 Latency(us) 00:32:52.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.684 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:52.684 nvme0n1 : 1.02 6805.99 26.59 0.00 0.00 18619.51 11086.51 30583.47 00:32:52.684 =================================================================================================================== 00:32:52.684 Total : 6805.99 26.59 0.00 0.00 18619.51 11086.51 30583.47 00:32:52.684 0 00:32:52.684 10:21:31 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:52.684 10:21:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:52.945 10:21:31 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:32:52.946 10:21:31 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:32:52.946 10:21:31 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:32:52.946 10:21:31 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:32:52.946 10:21:31 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:32:52.946 10:21:31 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.946 10:21:32 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:32:52.946 10:21:32 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:32:52.946 10:21:32 keyring_linux -- keyring/linux.sh@23 -- # return 00:32:52.946 10:21:32 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:52.946 10:21:32 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:52.946 10:21:32 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:32:53.207 [2024-07-25 10:21:32.182152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.207 [2024-07-25 10:21:32.182180] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:53.207 [2024-07-25 10:21:32.183146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be20f0 (9): Bad file descriptor 00:32:53.207 [2024-07-25 10:21:32.184150] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:53.207 [2024-07-25 10:21:32.184161] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:53.207 [2024-07-25 10:21:32.184166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:53.207 request: 00:32:53.207 { 00:32:53.207 "name": "nvme0", 00:32:53.207 "trtype": "tcp", 00:32:53.207 "traddr": "127.0.0.1", 00:32:53.207 "adrfam": "ipv4", 00:32:53.207 "trsvcid": "4420", 00:32:53.207 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:53.207 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:53.207 "prchk_reftag": false, 00:32:53.207 "prchk_guard": false, 00:32:53.207 "hdgst": false, 00:32:53.207 "ddgst": false, 00:32:53.207 "psk": ":spdk-test:key1", 00:32:53.207 "method": "bdev_nvme_attach_controller", 00:32:53.207 "req_id": 1 00:32:53.207 } 00:32:53.207 Got JSON-RPC error response 00:32:53.207 response: 00:32:53.207 { 00:32:53.207 "code": -5, 00:32:53.207 "message": "Input/output error" 00:32:53.207 } 00:32:53.207 10:21:32 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:32:53.207 10:21:32 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:53.207 10:21:32 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:53.207 10:21:32 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@33 -- # sn=492880869 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 492880869 00:32:53.207 1 links removed 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:32:53.207 10:21:32 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:32:53.208 10:21:32 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:32:53.208 10:21:32 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:32:53.208 10:21:32 keyring_linux -- 
keyring/linux.sh@33 -- # sn=105541499 00:32:53.208 10:21:32 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 105541499 00:32:53.208 1 links removed 00:32:53.208 10:21:32 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1529079 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1529079 ']' 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1529079 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529079 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529079' 00:32:53.208 killing process with pid 1529079 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@969 -- # kill 1529079 00:32:53.208 Received shutdown signal, test time was about 1.000000 seconds 00:32:53.208 00:32:53.208 Latency(us) 00:32:53.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:53.208 =================================================================================================================== 00:32:53.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:53.208 10:21:32 keyring_linux -- common/autotest_common.sh@974 -- # wait 1529079 00:32:53.468 10:21:32 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1529054 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 1529054 ']' 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 1529054 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1529054 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1529054' 00:32:53.468 killing process with pid 1529054 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@969 -- # kill 1529054 00:32:53.468 10:21:32 keyring_linux -- common/autotest_common.sh@974 -- # wait 1529054 00:32:53.730 00:32:53.730 real 0m4.913s 00:32:53.730 user 0m8.553s 00:32:53.730 sys 0m1.202s 00:32:53.730 10:21:32 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:32:53.730 10:21:32 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:32:53.730 ************************************ 00:32:53.730 END TEST keyring_linux 00:32:53.730 ************************************ 00:32:53.730 10:21:32 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@343 -- # '[' 
0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:32:53.730 10:21:32 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:32:53.730 10:21:32 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:32:53.730 10:21:32 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:32:53.730 10:21:32 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:32:53.730 10:21:32 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:32:53.730 10:21:32 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:32:53.730 10:21:32 -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:53.730 10:21:32 -- common/autotest_common.sh@10 -- # set +x 00:32:53.730 10:21:32 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:32:53.730 10:21:32 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:32:53.730 10:21:32 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:32:53.730 10:21:32 -- common/autotest_common.sh@10 -- # set +x 00:33:01.877 INFO: APP EXITING 00:33:01.877 INFO: killing all VMs 00:33:01.877 INFO: killing vhost app 00:33:01.877 WARN: no vhost pid file found 00:33:01.877 INFO: EXIT DONE 00:33:04.428 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:04.428 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:04.689 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:04.689 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:04.690 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:04.690 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:04.951 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:04.951 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:04.951 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:04.951 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:09.160 Cleaning 00:33:09.160 Removing: /var/run/dpdk/spdk0/config 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:09.160 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:09.160 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:09.160 Removing: /var/run/dpdk/spdk1/config 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:09.160 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:09.160 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:09.160 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:09.160 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:09.160 Removing: /var/run/dpdk/spdk2/config 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:09.160 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:09.160 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:09.160 Removing: /var/run/dpdk/spdk3/config 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:09.160 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:09.160 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:09.160 Removing: /var/run/dpdk/spdk4/config 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:09.160 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:09.160 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:09.160 Removing: /dev/shm/bdev_svc_trace.1 00:33:09.160 Removing: /dev/shm/nvmf_trace.0 00:33:09.160 Removing: /dev/shm/spdk_tgt_trace.pid1075845 00:33:09.160 Removing: /var/run/dpdk/spdk0 00:33:09.160 Removing: /var/run/dpdk/spdk1 00:33:09.160 Removing: /var/run/dpdk/spdk2 00:33:09.160 Removing: /var/run/dpdk/spdk3 00:33:09.160 Removing: /var/run/dpdk/spdk4 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1074288 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1075845 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1076363 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1077420 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1077739 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1078900 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1079141 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1079429 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1080397 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1081169 00:33:09.160 Removing: 
/var/run/dpdk/spdk_pid1081551 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1081856 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1082138 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1082420 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1082780 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1083128 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1083499 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1084577 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1088088 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1088374 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1088711 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1088893 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1089264 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1089490 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1089974 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1090026 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1090349 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1090685 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1090735 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1091061 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1091497 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1091851 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1092151 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1096704 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1101782 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1113770 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1114595 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1120124 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1120499 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1125691 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1132586 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1135722 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1148170 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1159032 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1161195 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1162215 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1183244 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1187909 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1241914 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1248333 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1255522 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1262754 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1262823 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1263850 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1264863 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1265928 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1266553 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1266608 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1266910 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1266955 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1266957 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1267959 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1268963 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1270043 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1270665 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1270806 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1271051 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1272410 00:33:09.160 Removing: /var/run/dpdk/spdk_pid1273818 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1284365 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1316054 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1321787 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1323790 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1326095 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1326161 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1326484 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1326524 00:33:09.161 Removing: 
/var/run/dpdk/spdk_pid1327216 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1329367 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1330322 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1331003 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1333594 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1334420 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1335134 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1339906 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1352110 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1356921 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1364150 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1365722 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1367709 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1372830 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1377853 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1386891 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1386919 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1391959 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1392051 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1392308 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1392895 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1392973 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1398344 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1398969 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1404337 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1407484 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1414066 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1420607 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1431085 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1439463 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1439507 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1461887 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1462576 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1463308 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1464131 00:33:09.161 Removing: /var/run/dpdk/spdk_pid1465141 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1465910 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1466687 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1467375 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1472417 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1472793 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1480344 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1480716 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1483235 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1490327 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1490338 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1496199 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1498715 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1500914 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1502411 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1504628 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1506142 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1516076 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1516736 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1517401 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1520198 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1520710 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1521413 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1526703 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1526828 00:33:09.422 Removing: /var/run/dpdk/spdk_pid1528516 00:33:09.423 Removing: /var/run/dpdk/spdk_pid1529054 00:33:09.423 Removing: /var/run/dpdk/spdk_pid1529079 00:33:09.423 Clean 00:33:09.423 10:21:48 -- common/autotest_common.sh@1451 -- # return 0 00:33:09.423 10:21:48 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:33:09.423 10:21:48 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:33:09.423 10:21:48 -- common/autotest_common.sh@10 -- # set +x 00:33:09.423 10:21:48 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:33:09.423 10:21:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:09.423 10:21:48 -- common/autotest_common.sh@10 -- # set +x 00:33:09.683 10:21:48 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:09.683 10:21:48 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:09.683 10:21:48 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:09.683 10:21:48 -- spdk/autotest.sh@395 -- # hash lcov 00:33:09.683 10:21:48 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:09.683 10:21:48 -- spdk/autotest.sh@397 -- # hostname 00:33:09.683 10:21:48 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:09.683 geninfo: WARNING: invalid characters removed from testname! 00:33:36.272 10:22:12 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:36.534 10:22:15 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:39.137 10:22:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:40.123 10:22:19 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:42.036 10:22:20 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:43.949 10:22:22 -- spdk/autotest.sh@403 -- # lcov --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:45.861 10:22:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:45.861 10:22:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:45.861 10:22:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:45.861 10:22:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:45.861 10:22:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:45.861 10:22:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.861 10:22:24 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.861 10:22:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.861 10:22:24 -- paths/export.sh@5 -- $ export PATH 00:33:45.861 10:22:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:45.861 10:22:24 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:45.861 10:22:24 -- common/autobuild_common.sh@447 -- $ date +%s 00:33:45.861 10:22:24 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721895744.XXXXXX 00:33:45.861 10:22:24 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721895744.LDGxsN 00:33:45.861 10:22:24 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:33:45.861 10:22:24 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:33:45.861 10:22:24 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:33:45.861 10:22:24 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:45.861 10:22:24 -- common/autobuild_common.sh@462 -- $ 
scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:45.861 10:22:24 -- common/autobuild_common.sh@463 -- $ get_config_params 00:33:45.861 10:22:24 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:33:45.861 10:22:24 -- common/autotest_common.sh@10 -- $ set +x 00:33:45.861 10:22:24 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:33:45.861 10:22:24 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:33:45.861 10:22:24 -- pm/common@17 -- $ local monitor 00:33:45.861 10:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:45.861 10:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:45.861 10:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:45.861 10:22:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:45.861 10:22:24 -- pm/common@25 -- $ sleep 1 00:33:45.861 10:22:24 -- pm/common@21 -- $ date +%s 00:33:45.861 10:22:24 -- pm/common@21 -- $ date +%s 00:33:45.861 10:22:24 -- pm/common@21 -- $ date +%s 00:33:45.861 10:22:24 -- pm/common@21 -- $ date +%s 00:33:45.862 10:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895744 00:33:45.862 10:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895744 00:33:45.862 10:22:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895744 00:33:45.862 10:22:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895744 00:33:45.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895744_collect-vmstat.pm.log 00:33:45.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895744_collect-cpu-load.pm.log 00:33:45.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895744_collect-cpu-temp.pm.log 00:33:45.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895744_collect-bmc-pm.bmc.pm.log 00:33:46.804 10:22:25 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:33:46.804 10:22:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:33:46.804 10:22:25 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:46.804 10:22:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:46.804 10:22:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:46.804 10:22:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:46.804 10:22:25 -- 
common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:46.804 10:22:25 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:46.804 10:22:25 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:46.804 10:22:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:46.805 10:22:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:46.805 10:22:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:46.805 10:22:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:46.805 10:22:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:46.805 10:22:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:46.805 10:22:25 -- pm/common@44 -- $ pid=1541465 00:33:46.805 10:22:25 -- pm/common@50 -- $ kill -TERM 1541465 00:33:46.805 10:22:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:46.805 10:22:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:46.805 10:22:25 -- pm/common@44 -- $ pid=1541466 00:33:46.805 10:22:25 -- pm/common@50 -- $ kill -TERM 1541466 00:33:46.805 10:22:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:46.805 10:22:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:46.805 10:22:25 -- pm/common@44 -- $ pid=1541467 00:33:46.805 10:22:25 -- pm/common@50 -- $ kill -TERM 1541467 00:33:46.805 10:22:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:46.805 10:22:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:46.805 10:22:25 -- pm/common@44 -- $ pid=1541491 00:33:46.805 10:22:25 -- pm/common@50 -- $ sudo -E kill -TERM 1541491 00:33:46.805 + [[ -n 953822 ]] 00:33:46.805 + sudo kill 953822 00:33:46.816 [Pipeline] } 00:33:46.836 [Pipeline] // stage 00:33:46.842 [Pipeline] } 00:33:46.861 [Pipeline] // timeout 00:33:46.866 [Pipeline] } 00:33:46.884 [Pipeline] // catchError 00:33:46.889 [Pipeline] } 00:33:46.910 [Pipeline] // wrap 00:33:46.915 [Pipeline] } 00:33:46.929 [Pipeline] // catchError 00:33:46.937 [Pipeline] stage 00:33:46.938 [Pipeline] { (Epilogue) 00:33:46.951 [Pipeline] catchError 00:33:46.953 [Pipeline] { 00:33:46.966 [Pipeline] echo 00:33:46.967 Cleanup processes 00:33:46.971 [Pipeline] sh 00:33:47.260 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.260 1541576 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:33:47.260 1542014 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.273 [Pipeline] sh 00:33:47.559 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:47.559 ++ grep -v 'sudo pgrep' 00:33:47.559 ++ awk '{print $1}' 00:33:47.559 + sudo kill -9 1541576 00:33:47.572 [Pipeline] sh 00:33:47.858 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:00.108 [Pipeline] sh 00:34:00.397 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:00.397 Artifacts sizes are good 00:34:00.412 [Pipeline] archiveArtifacts 00:34:00.421 Archiving artifacts 00:34:00.642 [Pipeline] sh 00:34:00.930 + sudo chown -R sys_sgci 
/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:00.944 [Pipeline] cleanWs 00:34:00.953 [WS-CLEANUP] Deleting project workspace... 00:34:00.953 [WS-CLEANUP] Deferred wipeout is used... 00:34:00.960 [WS-CLEANUP] done 00:34:00.962 [Pipeline] } 00:34:00.981 [Pipeline] // catchError 00:34:00.992 [Pipeline] sh 00:34:01.277 + logger -p user.info -t JENKINS-CI 00:34:01.287 [Pipeline] } 00:34:01.303 [Pipeline] // stage 00:34:01.308 [Pipeline] } 00:34:01.326 [Pipeline] // node 00:34:01.332 [Pipeline] End of Pipeline 00:34:01.364 Finished: SUCCESS
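Note on the cleanup idiom used above: both the prologue and the epilogue of this pipeline locate and kill any process still running out of the job's spdk workspace before proceeding. A minimal standalone sketch of that idiom follows; it is not part of the captured console output, the variable names are illustrative, and the workspace path is simply the one visible in this log.

    # List processes whose command line references the job workspace,
    # drop the pgrep invocation itself, and force-kill whatever remains.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -n "$pids" ] && sudo kill -9 $pids
    true   # mirror the log's trailing "+ true" so an empty pid list does not fail the step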